❓ [Question] How to Enable the Torch-TensorRT Partition Feature? #886
-
❓ Question

Hello, I want to use TensorRT to run VectorNet from https://github.com/xk-huang/yet-another-vectornet. However, when I try to convert the TorchScript model using torchtrtc, it terminates with an unsupported op: `torch_scatter::scatter_max`.

I have noticed that Torch-TensorRT can fall back to native PyTorch when TensorRT does not support some of a model's subgraphs. My question is: why doesn't this work here, and how do I enable it?
Replies: 2 comments
-
It is enabled by default. The reason your compilation is failing is that you are using ops from a 3rd-party library (not `torch`), and those ops are not loaded by the torchtrtc program, so neither PyTorch nor Torch-TRT knows about them when the model is deserialized. I would recommend trying the Python API with `torch_scatter` imported as well, as that is the easiest way to try this. So something like:

```python
import torch            # imports standard PyTorch ops and APIs
import torch_scatter    # imports custom ops and registers them with PyTorch
import torch_tensorrt

...

trt_model = torch_tensorrt.compile(my_model, ...)  # by default `require_full_compilation = False`, i.e. partial compilation
```
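The failure mode described above can be pictured as an op-registry lookup. This is a toy sketch in plain Python (not the real `torch` API): importing an extension module registers its custom ops, so a tool like torchtrtc that never imports the extension cannot resolve `torch_scatter::scatter_max` when it deserializes the model, while a Python script that imports `torch_scatter` first can.

```python
# Toy model of PyTorch's custom-op registry (illustrative only, not the real API).
# Importing an extension registers its ops; a deserializer that never imports
# the extension cannot resolve those ops.

OP_REGISTRY = {"aten::add"}  # ops known after `import torch` (illustrative subset)

def import_torch_scatter():
    """Simulates `import torch_scatter`, which registers its custom ops."""
    OP_REGISTRY.add("torch_scatter::scatter_max")

def resolve(op_name):
    """Simulates looking up an op while deserializing a TorchScript model."""
    return op_name in OP_REGISTRY

# torchtrtc's view: torch_scatter was never imported, so the op is unknown
print(resolve("torch_scatter::scatter_max"))  # False

# Python API view: import the extension first, then the op resolves
import_torch_scatter()
print(resolve("torch_scatter::scatter_max"))  # True
```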
-
Thanks very much