"Segmentation fault (core dumped)" for new JetPack 6.1 install on Orin Nano when running demo #1913
Same issue. I moved to JetPack 6.0 and that worked for now.
I am having the same problem with detectnet.
@Granluke sorry, yes: TRT10 removed the deprecated support for caffemodels and TensorFlow UFF models, which this tutorial used heavily, having been started in that era. This repo is also based on C++; it was created almost 8 years ago, when that was more normal and before there were Python bindings for TensorRT or good PyTorch support in ONNX, etc. I'm still trying to find a solution for this on JP 6.1 without migrating all the models, because they aren't ViTs and not the focus moving forward. For now I would use JP 6.0 if you require the repo, sorry for the inconvenience. If you get the jetson-inference:r36.3 docker image running on JP 6.1 / r36.4, let me know 👍
@dusty-nv No worries, really appreciate everything you are doing! However, I am encountering this issue with .onnx models, which I thought TRT10 still supports.
Yep, it does; that's now the primary way to import into TRT without using its model/layer API directly. However, there were other deprecations/removals in TRT10 that changed other inferencing APIs, and those are causing the segfault. Apologies that I have not had the time to go back and figure it out. For newer stuff using TRT, I mostly use torch2trt for approaches like clip_trt, where it just drops in to existing PyTorch code, replacing the vanilla model. Ultralytics also has native TRT support (with INT8) now. With the more rapid pace of model development, it is easier to keep that stuff updated because it is in-tree with the models. EDIT: I still use jetson-utils on JetPack 6.1; that works fine and has persisted in being useful.
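The drop-in style described above can be sketched roughly as follows. This is a hedged example, not code from this repo: the placeholder network is an assumption, and the conversion is guarded because torch2trt only works on a machine with TensorRT and a CUDA device available:

```python
import torch
import torch.nn as nn

try:
    # torch2trt is only installable on machines with TensorRT present
    from torch2trt import torch2trt
    HAVE_TORCH2TRT = True
except ImportError:
    HAVE_TORCH2TRT = False

if HAVE_TORCH2TRT and torch.cuda.is_available():
    # Placeholder model standing in for an existing PyTorch model
    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).cuda().eval()
    x = torch.randn(1, 3, 224, 224).cuda()

    # The converted module keeps the same forward() signature, so it
    # replaces the vanilla model in existing PyTorch code unchanged
    model_trt = torch2trt(model, [x])
    y = model_trt(x)
else:
    print("torch2trt / CUDA not available; skipping TensorRT conversion")
```

Because inference stays inside PyTorch code, this sidesteps the removed TRT10 C++ inferencing APIs that the repo's current binaries depend on.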
I have jetson-inference:r36.3.0 running under JetPack 6.1 (R36.4) and can confirm that almost all of the models no longer run. detectnet does, but most others fail. The most common error is simply that the CSI cameras do not run.
I'm experiencing a segmentation fault trying to run the segnet demo with Jetpack 6.1 on the Orin Nano.
Steps to replicate:

1. Download and write the JP 6.1 image from https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v4.0/jp61-orin-nano-sd-card-image.zip to a micro SD card.
2. Boot the new image. Perform initial user setup, config, etc.
3. Follow the jetson-inference instructions here: https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md. Install PyTorch when prompted.
4. `cd data/images` in this repo.
5. Run `segnet.py airplane_0.jpg airplane_0_output.jpg`.
6. Encounter the segmentation fault. See below for log.

I've also run `segnet` with the same segmentation fault.