Unable to run batch inference with c++ #23450
Comments
It seems that the input data has not been handled correctly. You can refer to the Python code here for guidance:
Thank you, @eKevinHoang. The fact is that I am already able to run batch inference in Python, but I cannot understand what is wrong in C++. I am honestly struggling to find both documentation and a working example of batch inference for networks that accept images as input.
How can I inspect the shape of bgr_image and input_tensor in my example? Thank you
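For the shape question above, a small sketch may help. The std-only helper below formats a shape vector for logging; the comments show where the shapes would come from with ONNX Runtime and OpenCV (the variable names `input_tensor` and `bgr_image` are taken from the question, not from runnable code here).

```cpp
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Format a tensor shape such as {1, 3, 640, 640} for logging.
std::string shape_to_string(const std::vector<int64_t>& shape) {
    std::ostringstream oss;
    oss << '[';
    for (std::size_t i = 0; i < shape.size(); ++i) {
        if (i) oss << ", ";
        oss << shape[i];
    }
    oss << ']';
    return oss.str();
}

// With ONNX Runtime, the shape of an existing Ort::Value can be read with:
//   std::vector<int64_t> shape =
//       input_tensor.GetTensorTypeAndShapeInfo().GetShape();
// For a cv::Mat, inspect bgr_image.rows, bgr_image.cols and
// bgr_image.channels() (a cv::Mat has no batch dimension).
```

Printing the shape right before `session.Run(...)` usually makes a batch-size mismatch obvious.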
Check this one: ONNX-Runtime-GPU-image-classifciation-example.
@davave1693 You can refer to the following example code. Based on your Python code, you also need to perform cv::resize and cv::cvtColor before calling this function.
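The referenced example code is not reproduced in this thread, so as a rough sketch of the preprocessing step being discussed: the function below converts an interleaved HWC BGR uint8 buffer (the layout OpenCV produces) into a planar CHW RGB float buffer scaled to [0, 1], which is what most ONNX detection models expect. Resizing is assumed to have been done beforehand with cv::resize; the function name is hypothetical.

```cpp
#include <cstddef>
#include <vector>

// Convert one HWC BGR uint8 image into a CHW RGB float buffer in [0, 1].
// This is the manual equivalent of cv::cvtColor(BGR2RGB) plus the
// transpose/normalisation usually done in Python preprocessing.
std::vector<float> hwc_bgr_to_chw_rgb(const std::vector<unsigned char>& hwc,
                                      std::size_t height, std::size_t width) {
    const std::size_t plane = height * width;
    std::vector<float> chw(3 * plane);
    for (std::size_t y = 0; y < height; ++y) {
        for (std::size_t x = 0; x < width; ++x) {
            const std::size_t px = y * width + x;
            const unsigned char b = hwc[px * 3 + 0];  // OpenCV stores BGR
            const unsigned char g = hwc[px * 3 + 1];
            const unsigned char r = hwc[px * 3 + 2];
            chw[0 * plane + px] = r / 255.0f;  // R plane
            chw[1 * plane + px] = g / 255.0f;  // G plane
            chw[2 * plane + px] = b / 255.0f;  // B plane
        }
    }
    return chw;
}
```

If the model was exported with mean/std normalisation baked into the Python pipeline, that scaling must be applied here as well.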
Thanks @rodrigovimieiro and @eKevinHoang. @rodrigovimieiro, I will try to compile and run the example at your link and then let you know if I succeed. @eKevinHoang, I updated my preprocessing method as you suggested:
The same error occurs if I return a std::vector and call (in main)
Hello @rodrigovimieiro, I was able to build the example, but I got the exact same error when I call
in image_classifier.cpp
@davave1693 I think you are using a C++ version earlier than C++17. I am currently using C++17. 😄 To simplify testing, you can use the following code: Then create the tensor as follows: This way, you don’t need to perform cv2.resize or cv2.cvtColor. I hope this helps you test the results more effectively.
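The tensor-creation snippet referred to above is not shown in the thread, so here is a hedged sketch of the usual batching pattern: per-image CHW buffers are concatenated into one contiguous blob whose shape is {N, 3, H, W}. A common cause of batch-inference errors is passing N images while the shape still says 1. The struct and function names are illustrative, not from the original code.

```cpp
#include <cstdint>
#include <vector>

// One contiguous buffer plus the {N, C, H, W} shape that describes it.
struct BatchBlob {
    std::vector<float> data;
    std::vector<int64_t> shape;
};

// Concatenate per-image CHW float buffers into a single batch blob.
BatchBlob make_batch(const std::vector<std::vector<float>>& images,
                     int64_t channels, int64_t height, int64_t width) {
    BatchBlob blob;
    blob.shape = {static_cast<int64_t>(images.size()), channels, height, width};
    blob.data.reserve(images.size() *
                      static_cast<std::size_t>(channels * height * width));
    for (const auto& img : images)
        blob.data.insert(blob.data.end(), img.begin(), img.end());
    return blob;
}

// The blob would then back the input tensor, e.g. with ONNX Runtime:
//   Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator,
//                                                    OrtMemTypeDefault);
//   Ort::Value input = Ort::Value::CreateTensor<float>(
//       mem, blob.data.data(), blob.data.size(),
//       blob.shape.data(), blob.shape.size());
```

Note that `CreateTensor` only wraps the buffer, so `blob` must stay alive until `session.Run(...)` has returned.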
Catch the exception and print its message. C++ terminates the program on an unhandled exception.
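The advice above can be sketched as follows. `Ort::Exception` derives from `std::exception`, so a plain `catch (const std::exception&)` around `session.Run(...)` surfaces the runtime's error text instead of letting the process terminate. Here `std::runtime_error` stands in for `Ort::Exception` so the snippet builds without the onnxruntime headers, and the error string is invented for illustration.

```cpp
#include <exception>
#include <iostream>
#include <stdexcept>
#include <string>

// Wrap the inference call so a failure prints its message instead of
// terminating the process via std::terminate.
std::string run_and_report() {
    try {
        // session.Run(...) would go here; simulate a failing call:
        throw std::runtime_error("Got invalid dimensions for input");
    } catch (const std::exception& e) {  // also catches Ort::Exception
        std::cerr << "inference failed: " << e.what() << '\n';
        return e.what();
    }
    return "ok";  // reached only if Run() succeeds
}
```

The message ONNX Runtime prints typically names the offending input and the expected vs. actual dimensions, which pinpoints batching mistakes quickly.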
@eKevinHoang yes, I am currently using C++14. Anyway, the preprocessing method you provided made my inference work! I just had to modify it so that a vector is returned from preprocessing instead of an Ort::Value. Thank you again!
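The fix described above likely works because `Ort::Value::CreateTensor` does not copy the data it is given; it only stores a pointer, so a tensor returned from a function whose local buffer owned the pixels would dangle. Returning the `std::vector<float>` itself keeps the buffer alive in the caller. A minimal sketch of the pattern, with a hypothetical stand-in for the real preprocessing:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the real preprocessing (resize, BGR->RGB,
// HWC->CHW, normalisation). Crucially, it returns the OWNING buffer so the
// caller controls its lifetime.
std::vector<float> preprocess_stub(std::size_t count) {
    return std::vector<float>(count, 0.5f);
}

// In the caller:
//   std::vector<float> input_data = preprocess_stub(1 * 3 * 640 * 640);
//   // input_data stays in scope here, so a tensor created over
//   // input_data.data() with Ort::Value::CreateTensor remains valid
//   // for the whole session.Run(...) call.
```

Creating the tensor inside the preprocessing function and returning it would leave it pointing at a destroyed local vector, which is a classic source of exactly this kind of runtime failure.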
Describe the issue
I am trying to run my object detection model in C++.
The model is in ONNX format and has a dynamic input batch size. I was able to run preprocessing and inference in Python, but I am encountering difficulties in C++.
When running
the following error occurs:
To reproduce
model.h
model.cpp
main.cpp
Below the python version
Urgency
Blocking problem for project
Platform
Windows
OS Version
11 Pro
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.20.1
ONNX Runtime API
C++
Architecture
X64
Execution Provider
TensorRT
Execution Provider Library Version
10.5.0.18