How to export ppyoloe_seg from PaddleDetection to ONNX and convert it with trtexec #1483
I asked a similar question in PaddleDetection: https://github.com/PaddlePaddle/PaddleDetection/issues/9289. Previously, training/evaluation/export of the PaddleDetection ppyoloe_seg instance segmentation model on a GPU server was all done under paddle 2.6.2. After some experimentation, I found that with a paddle 3.0 container (paddlepaddle/paddle:3.0.0b1-gpu-cuda11.8-cudnn8.6-trt8.5) I can successfully convert the model to ONNX,
but the ONNX -> TensorRT conversion fails on an NVIDIA XAVIER edge device with the error below. Any advice would be appreciated!

nvidia@linux:~/zj/paddle$ trtexec --onnx=ppyoloe_seg_s_80e_xfy.onnx --saveEngine=ppyoloe_seg_s_80e_xfy.engine --workspace=1024 --avgRuns=1000 --shapes=image:1x3x640x640,scale_factor:1x2 --fp16
&&&& RUNNING TensorRT.trtexec # trtexec --onnx=ppyoloe_seg_s_80e_xfy.onnx --saveEngine=ppyoloe_seg_s_80e_xfy.engine --workspace=1024 --avgRuns=1000 --shapes=image:1x3x640x640,scale_factor:1x2 --fp16
[02/05/2025-18:28:15] [I] === Model Options ===
[02/05/2025-18:28:15] [I] Format: ONNX
[02/05/2025-18:28:15] [I] Model: ppyoloe_seg_s_80e_xfy.onnx
[02/05/2025-18:28:15] [I] Output:
[02/05/2025-18:28:15] [I] === Build Options ===
[02/05/2025-18:28:15] [I] Max batch: explicit
[02/05/2025-18:28:15] [I] Workspace: 1024 MB
[02/05/2025-18:28:15] [I] minTiming: 1
[02/05/2025-18:28:15] [I] avgTiming: 8
[02/05/2025-18:28:15] [I] Precision: FP32+FP16
[02/05/2025-18:28:15] [I] Calibration:
[02/05/2025-18:28:15] [I] Safe mode: Disabled
[02/05/2025-18:28:15] [I] Save engine: ppyoloe_seg_s_80e_xfy.engine
[02/05/2025-18:28:15] [I] Load engine:
[02/05/2025-18:28:15] [I] Builder Cache: Enabled
[02/05/2025-18:28:15] [I] NVTX verbosity: 0
[02/05/2025-18:28:15] [I] Inputs format: fp32:CHW
[02/05/2025-18:28:15] [I] Outputs format: fp32:CHW
[02/05/2025-18:28:15] [I] Input build shape: image=1x3x640x640+1x3x640x640+1x3x640x640
[02/05/2025-18:28:15] [I] Input build shape: scale_factor=1x2+1x2+1x2
[02/05/2025-18:28:15] [I] Input calibration shapes: model
[02/05/2025-18:28:15] [I] === System Options ===
[02/05/2025-18:28:15] [I] Device: 0
[02/05/2025-18:28:15] [I] DLACore:
[02/05/2025-18:28:15] [I] Plugins:
[02/05/2025-18:28:15] [I] === Inference Options ===
[02/05/2025-18:28:15] [I] Batch: Explicit
[02/05/2025-18:28:15] [I] Input inference shape: scale_factor=1x2
[02/05/2025-18:28:15] [I] Input inference shape: image=1x3x640x640
[02/05/2025-18:28:15] [I] Iterations: 10
[02/05/2025-18:28:15] [I] Duration: 3s (+ 200ms warm up)
[02/05/2025-18:28:15] [I] Sleep time: 0ms
[02/05/2025-18:28:15] [I] Streams: 1
[02/05/2025-18:28:15] [I] ExposeDMA: Disabled
[02/05/2025-18:28:15] [I] Spin-wait: Disabled
[02/05/2025-18:28:15] [I] Multithreading: Disabled
[02/05/2025-18:28:15] [I] CUDA Graph: Disabled
[02/05/2025-18:28:15] [I] Skip inference: Disabled
[02/05/2025-18:28:15] [I] Inputs:
[02/05/2025-18:28:15] [I] === Reporting Options ===
[02/05/2025-18:28:15] [I] Verbose: Disabled
[02/05/2025-18:28:15] [I] Averages: 1000 inferences
[02/05/2025-18:28:15] [I] Percentile: 99
[02/05/2025-18:28:15] [I] Dump output: Disabled
[02/05/2025-18:28:15] [I] Profile: Disabled
[02/05/2025-18:28:15] [I] Export timing to JSON file:
[02/05/2025-18:28:15] [I] Export output to JSON file:
[02/05/2025-18:28:15] [I] Export profile to JSON file:
[02/05/2025-18:28:15] [I]
----------------------------------------------------------------
Input filename: ppyoloe_seg_s_80e_xfy.onnx
ONNX IR version: 0.0.7
Opset version: 13
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
[02/05/2025-18:28:18] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
terminate called after throwing an instance of 'std::out_of_range'
what(): Attribute not found: axes
Aborted
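The log above shows the model was exported with ONNX opset 13. In opset 13, the `Squeeze`/`Unsqueeze` operators take `axes` as an input tensor rather than a node attribute, while the older ONNX parser in the TensorRT release shipped with many Xavier JetPack versions still looks up the attribute, which matches the `Attribute not found: axes` abort. A possible workaround, assuming Paddle2ONNX was used for the export and using placeholder paths for the exported inference model, is to re-export at opset 12 and retry:

```shell
# Re-export at opset 12 so Squeeze/Unsqueeze keep `axes` as an attribute.
# model_dir and the file names below are placeholders for your actual
# export output directory.
paddle2onnx \
    --model_dir output_inference/ppyoloe_seg_s_80e_xfy \
    --model_filename model.pdmodel \
    --params_filename model.pdiparams \
    --opset_version 12 \
    --save_file ppyoloe_seg_s_80e_xfy.onnx

# Retry building the engine on the Xavier with the re-exported model.
trtexec --onnx=ppyoloe_seg_s_80e_xfy.onnx \
    --saveEngine=ppyoloe_seg_s_80e_xfy.engine \
    --workspace=1024 \
    --shapes=image:1x3x640x640,scale_factor:1x2 \
    --fp16
```

Alternatively, upgrading JetPack (and with it TensorRT) on the Xavier should also let the parser handle opset-13 models.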
Please fill in the information below completely so that we can resolve the issue quickly. Thank you!
Hello, and thank you very much for open-sourcing the PP-series frameworks. I am currently using the ppyoloe_seg algorithm in PaddleDetection. After training, I would like to export the model to ONNX and then run trtexec conversion and testing on a Jetson device, just like the ppyoloe algorithm (https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.8/configs/ppyoloe/README_cn.md):
However, I did not find similar instructions in the ppyoloe_seg README.md (https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.8/configs/ppyoloe_seg/README.md), so I would like to know how to properly export ppyoloe_seg. Thank you very much!
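For reference, the ppyoloe README linked above describes a two-step deployment export (export to Paddle inference format, then convert with Paddle2ONNX). A minimal sketch for ppyoloe_seg, assuming the same workflow applies; the config path and weight file below are placeholders for your own training run:

```shell
# Step 1: export the trained model to Paddle inference format.
# The config and weights paths are placeholders; trt=True and
# exclude_nms=True follow the ppyoloe TensorRT deployment guide.
python tools/export_model.py \
    -c configs/ppyoloe_seg/your_ppyoloe_seg_config.yml \
    -o weights=output/your_run/model_final.pdparams \
    trt=True exclude_nms=True

# Step 2: convert the exported inference model to ONNX with Paddle2ONNX.
paddle2onnx \
    --model_dir output_inference/your_ppyoloe_seg_model \
    --model_filename model.pdmodel \
    --params_filename model.pdiparams \
    --save_file your_ppyoloe_seg_model.onnx
```

Whether ppyoloe_seg's mask head exports cleanly through this path is exactly the open question here; the sketch only mirrors the documented ppyoloe procedure.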
Problem description
Please describe the error in detail here
I tried the ONNX model conversion, and the export to ONNX appears to succeed,
but the final trtexec step fails: