The libcudnn8 version is not correct when using a TensorRT release that is not the latest one
Description
The libcudnn8 package should be libcudnn8_8.9.1.23-1+cuda11.8_arm64, but it is libcudnn8_8.9.1.23-1+cuda12.1_arm64.
The libcudnn8-dev package should be libcudnn8-dev_8.9.1.23-1+cuda11.8_arm64, but it is libcudnn8-dev_8.9.1.23-1+cuda12.1_arm64.
So running the command
trtexec --onnx=resnet50_onnx_model.onnx --saveEngine=resnet_engine_onnx.trt --explicitBatch
ends in a core dump.
I downloaded the correct versions from
https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu2004/sbsa/libcudnn8_8.9.1.23-1+cuda11.8_arm64.deb
and
https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu2004/sbsa/libcudnn8-dev_8.9.1.23-1+cuda11.8_arm64.deb
and installed them with
dpkg -i libcudnn8_8.9.1.23-1+cuda11.8_arm64.deb
and
dpkg -i libcudnn8-dev_8.9.1.23-1+cuda11.8_arm64.deb
After that, the command works.
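For reference, this is a minimal shell sketch of the workaround described above: check which cuDNN build the image actually installed, then replace it with the cuda11.8 packages. The URLs and file names are the ones from this report; wget is only an assumption here, any download method works.

# Check the installed cuDNN build (on the broken image this reports +cuda12.1).
dpkg -l | grep libcudnn8

# Fetch the cuda11.8 builds of the runtime and dev packages (download method assumed).
wget https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu2004/sbsa/libcudnn8_8.9.1.23-1+cuda11.8_arm64.deb
wget https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu2004/sbsa/libcudnn8-dev_8.9.1.23-1+cuda11.8_arm64.deb

# Install them in place of the cuda12.1 builds.
dpkg -i libcudnn8_8.9.1.23-1+cuda11.8_arm64.deb
dpkg -i libcudnn8-dev_8.9.1.23-1+cuda11.8_arm64.deb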
Environment
TensorRT Version: 8.5 (I suspect other versions that are not the latest have the same problem.)
NVIDIA GPU: A40
NVIDIA Driver Version: 520.61.05
CUDA Version: 11.8.0
CUDNN Version: cuDNN 8 (but the installed package is built against the wrong CUDA version, as described above)
Operating System: Ubuntu 20.04, aarch64
Python Version (if applicable): 3.8.10
Tensorflow Version (if applicable): None
PyTorch Version (if applicable): None
Baremetal or Container (if so, version): Container, 24.0.1
Relevant Files
Model link: https://github.com/NVIDIA/TensorRT/blob/release/8.4/quickstart/IntroNotebooks/4.%20Using%20PyTorch%20through%20ONNX.ipynb
Steps To Reproduce
Commands or scripts:
./docker/build.sh --file docker/ubuntu-20.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu20.04-cuda11.8 --cuda 11.8.0
trtexec --onnx=resnet50_onnx_model.onnx --saveEngine=resnet_engine_onnx.trt --explicitBatch
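To confirm the wrong package without running trtexec, the freshly built image can also be inspected directly. This is only a sketch: it assumes the --tag value above names a runnable image without a custom entrypoint.

# List the cuDNN packages baked into the image; on the affected build this shows +cuda12.1 instead of +cuda11.8.
docker run --rm tensorrt-aarch64-ubuntu20.04-cuda11.8 dpkg -l | grep libcudnn8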
Have you tried the latest release?:
Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt):
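A possible longer-term fix, rather than reinstalling the packages after the build, would be to pin the cuDNN packages to the CUDA 11.8 build inside docker/ubuntu-20.04-aarch64.Dockerfile. The following is only a sketch under the assumption that the Dockerfile installs libcudnn8 through apt-get; the exact version strings must match what the sbsa repository actually provides.

# Hypothetical pin in the aarch64 Dockerfile: forcing the +cuda11.8 build stops apt from picking +cuda12.1.
apt-get update && apt-get install -y --no-install-recommends \
    libcudnn8=8.9.1.23-1+cuda11.8 \
    libcudnn8-dev=8.9.1.23-1+cuda11.8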