
❓ [Question] How to Enable the Torch-TensorRT Partition Feature? #886

Answered by narendasan
huangxiao2008 asked this question in Q&A


It is enabled by default. Your compilation is failing because you are using ops from a third party (not torch), and those ops are not loaded by the torchtrtc program, so neither PyTorch nor Torch-TRT knows about them when the model is deserialized. The easiest way to try this is the Python API, with torch_scatter imported as well.

So something like:

import torch # imports standard PyTorch ops and APIs 
import torch_scatter # imports custom ops and registers with PyTorch
import torch_tensorrt

...

trt_model = torch_tensorrt.compile(my_model, ...) # by default `require_full_compilation = False` - i.e. partial compilation 
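
If you want finer control over partitioning, the compile call also exposes settings for it. Below is a minimal sketch, assuming the TorchScript frontend's compile settings (require_full_compilation, min_block_size, torch_executed_ops); the model class, input shape, and the op name passed to torch_executed_ops are placeholders for illustration, not values from this discussion:

import torch                # standard PyTorch ops and APIs
import torch_scatter        # registers the custom scatter ops with PyTorch
import torch_tensorrt

my_model = MyModel().eval().cuda()   # placeholder for your own model

trt_model = torch_tensorrt.compile(
    my_model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],   # placeholder input shape
    enabled_precisions={torch.float},
    require_full_compilation=False,         # allow unsupported ops to fall back to PyTorch
    min_block_size=3,                       # min contiguous TRT-supported ops per TRT engine
    torch_executed_ops=["aten::scatter_add"],  # force specific ops to stay in PyTorch (example op)
)

Ops that cannot be converted (such as the custom scatter ops) then run in PyTorch, while supported subgraphs run in TensorRT engines.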

Replies: 2 comments

Answer selected by huangxiao2008
Category: Q&A
Labels: question (Further information is requested)
2 participants
This discussion was converted from issue #876 on February 19, 2022 23:57.