Hey @cyrusbehr, can you guide me through running YOLOv8 inference with TensorRT C++ on a Jetson?
I have a Jetson Orin NX 16 GB with the following config:
Jetpack - 5.1.4
CUDA - 11.4.315
cuDNN - 8.6.0.166
TensorRT - 8.5.2.2
OpenCV - 4.6.0 with CUDA
Hi, the problem is probably the TensorRT version you're running. I had the same issues running the code, since most of it is written against the latest TensorRT version, especially the part that creates the .engine file. If I remember correctly, the main error comes from the engine.h file of the TensorRT C++ API library, and from this line in particular:
else if (tensorDataType == nvinfer1::DataType::kFP8) {
throw std::runtime_error("Error, model has unsupported output type");
}
Since TensorRT only added kFP8 support in 8.6.x, that enum value doesn't exist in your 8.5.2 headers, so I suggest updating your JetPack to 6.0. If you're still having issues, I'm willing to put together a modified version of this code and try to make it work, since I'm currently working with this library for my own project, but I can't promise to have it done within a week or so.
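If updating JetPack isn't an option right away, one workaround is to guard the kFP8 branch with TensorRT's version macros from NvInferVersion.h, so the same file still compiles against TensorRT 8.5.x where nvinfer1::DataType::kFP8 doesn't exist. Below is a minimal sketch of the idea; the helper name checkOutputDataType is hypothetical and not the library's actual function:

#include <NvInfer.h>
#include <NvInferVersion.h>  // defines NV_TENSORRT_MAJOR / NV_TENSORRT_MINOR
#include <stdexcept>

// Hypothetical helper illustrating the version guard, not the library's real API.
inline void checkOutputDataType(nvinfer1::DataType tensorDataType) {
    if (tensorDataType == nvinfer1::DataType::kFLOAT ||
        tensorDataType == nvinfer1::DataType::kHALF) {
        return; // common supported output types
    }
#if NV_TENSORRT_MAJOR > 8 || (NV_TENSORRT_MAJOR == 8 && NV_TENSORRT_MINOR >= 6)
    // kFP8 only exists in the TensorRT 8.6+ headers, so compile this branch conditionally.
    if (tensorDataType == nvinfer1::DataType::kFP8) {
        throw std::runtime_error("Error, model has unsupported output type (kFP8)");
    }
#endif
    throw std::runtime_error("Error, model has unsupported output type");
}

With a guard like this, the kFP8 check is only compiled on TensorRT 8.6 and newer, and the file builds unchanged on the 8.5.2 that ships with JetPack 5.1.x.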