Why can't we set the precision of all layers to fp16 or fp32? #3177
Comments
As the warning says: some layers are forced to run in INT32, and you cannot set those layers to FP32.
How can I skip that layer?
I think you might do something like the layer loop shown later in this thread.
Hello @nvluxiaoz, can you please explain how to force precision to trt.float32 and run this from the trtexec command line? If not, can you provide a code snippet? Thank you.
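One way to do this from the command line, as a sketch: recent trtexec builds (TensorRT 8.4+) accept per-layer precision flags. `model.onnx` and `my_layer` are placeholder names here, and the exact flag spelling should be confirmed with `trtexec --help` for your version:

```
trtexec --onnx=model.onnx --fp16 \
        --precisionConstraints=obey \
        --layerPrecisions=my_layer:fp32
```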
To avoid the warning, just don't set those layers' precision to FP32, or just safely ignore it.
I want to skip those layers, but I don't know how to identify them.
Filter by layer name or layer type?
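For example, a minimal sketch of filtering by output type (assuming `network` is an already-populated `INetworkDefinition`; treating any layer with an INT32 output as one TensorRT will force to INT32 is a heuristic, not official guidance):

```python
import tensorrt as trt

def force_fp32_where_possible(network: trt.INetworkDefinition) -> None:
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        # Skip layers producing INT32 tensors (shape/index math);
        # TensorRT forces these to INT32 and warns if you constrain them.
        if any(layer.get_output(j).dtype == trt.int32
               for j in range(layer.num_outputs)):
            continue
        layer.precision = trt.float32
        for j in range(layer.num_outputs):
            layer.set_output_type(j, trt.float32)
```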
Closing since there has been no activity for more than 3 weeks, thanks all!
Why close this? It's important.
for i in range(network.num_layers):
    network.get_layer(i).precision = trt.float32
Description
Hello, I'm trying to set the precision of specific layers to fp32, but after setting some layers I don't see any improvement (the final output is still NaN). To troubleshoot, I wanted to verify whether setting fp32 actually makes a difference, but I encountered an error when attempting to do so. Could you please explain the reason behind this error? Thank you very much.
Here is my code:
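(The original snippet was not captured in this copy of the thread; the following is a hedged reconstruction sketch of the kind of setup described above, with `model.onnx` as a placeholder path, not the author's actual code:)

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:  # placeholder model path
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
# Per-layer precision is only honored when a constraints flag is set:
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)

for i in range(network.num_layers):
    layer = network.get_layer(i)
    layer.precision = trt.float32  # force this layer to FP32

engine = builder.build_serialized_network(network, config)
```

Note that blindly forcing every layer, as in this loop, is exactly what triggers the INT32 warning discussed above; in practice one would skip the INT32-only layers.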