Hello,

We have JetPack 5.1.2 installed on our Xavier dev board.
We tried to update its default CUDA 11.4 to CUDA 11.8 using the script below from the NVIDIA download page. Even though we want CUDA 11.8, the script installs CUDA 12.2.
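To confirm which toolkit actually got installed, a minimal sketch (assuming the standard `/usr/local/cuda` symlink layout that NVIDIA's installers create; adjust the path if your layout differs):

```python
# Minimal sketch: confirm which CUDA toolkit the installer actually left
# on the system. Assumes the standard /usr/local/cuda symlink.
import json
import subprocess
from pathlib import Path

active = Path("/usr/local/cuda").resolve()   # e.g. /usr/local/cuda-12.2
print("Active toolkit directory:", active)

# Recent toolkits ship a version.json with the exact release number.
version_file = active / "version.json"
if version_file.exists():
    info = json.loads(version_file.read_text())
    print("CUDA release:", info["cuda"]["version"])

# nvcc reports the compiler's toolkit version as a cross-check.
subprocess.run([str(active / "bin" / "nvcc"), "--version"], check=True)
```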
Then we built .whl files for torch and torchvision from source for CUDA 12.2 using gcc 11, and installed the built wheels into our Python 3.8 environment.
We can run training and inference successfully on the Xavier board from that environment.
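As a sanity check that the wheels really target CUDA 12.2 and reach the GPU, something like this sketch runs cleanly (the comments note what we expect to see):

```python
# Minimal sketch: check that the self-built wheels were compiled against
# the expected toolkit and can actually run an op on the GPU.
import torch
import torchvision

print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("Compiled against CUDA:", torch.version.cuda)  # expect "12.2"
print("CUDA available:", torch.cuda.is_available())
print("Device:", torch.cuda.get_device_name(0))

# A tiny matmul on the GPU confirms the runtime works end to end.
x = torch.randn(4, 4, device="cuda")
print((x @ x).sum().item())
```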
Moreover, we installed onnxruntime 1.18.0 from the link provided on Jetson Zoo and ran inference successfully.
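The onnxruntime check looks roughly like this (a sketch; `model.onnx` is a placeholder for any valid ONNX model):

```python
# Minimal sketch: verify the Jetson Zoo onnxruntime build exposes the
# CUDA execution provider. "model.onnx" is a placeholder path.
import onnxruntime as ort

print("onnxruntime:", ort.__version__)              # expect 1.18.0
print("Providers:", ort.get_available_providers())  # expect CUDAExecutionProvider

session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print("Session providers:", session.get_providers())
```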
After that, we tried to build TensorRT from source following the documentation in the TensorRT GitHub repository.
There are two options for getting the prebuilt libraries the build needs: the TensorRT GA release or Docker.
For TensorRT GA, there is no version compatible with CUDA 12.2 on the TensorRT download page: TensorRT 8.x releases do not support CUDA versions newer than 11.8, and TensorRT 10.x releases do not support CUDA versions older than 12.4. We do not see a compatible TensorRT GA package for the aarch64 architecture and CUDA 12.2.
For Docker, we likewise do not see any images compatible with CUDA 12.2.
Is it impossible to use TensorRT with the CUDA 12.2 that JetPack 5.1.2 ends up with?