@bschmer try setting this line to False before exporting, then visualize the exported model with Netron to check whether the outputs are correct (Line 53 in ace3e02).
Sorry if this is a FAQ. I've poked around quite a bit and haven't found an answer. I'm new to YOLO, PyTorch, ONNX, etc., so please forgive any misphrased questions.
I've trained a YOLOv5s model, and it works great with the detect.py script that's included with YOLOv5. But when I export the model to ONNX and run inference with the exported model, the shape and content of the output are very different from what detect.py produces. The reason I want to sort out the ONNX output is that I have an existing pipeline that does further processing on the model's output, and it doesn't work properly because of these differences.
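(For context on the shape difference: detect.py applies confidence filtering and non-maximum suppression to the model's raw predictions before reporting boxes, whereas the exported ONNX graph typically emits the raw prediction tensor directly. A minimal sketch of that post-processing step, using NumPy on a hypothetical raw-output array shaped like YOLO's `[num_candidates, 5 + num_classes]` rows of `(cx, cy, w, h, obj_conf, class_scores...)` — the exact thresholds and layout are assumptions, not the pipeline's actual code:)

```python
import numpy as np

def postprocess(pred, conf_thres=0.25, iou_thres=0.45):
    """Turn raw YOLO-style predictions [N, 5+nc] into final detections
    [M, 6] = (x1, y1, x2, y2, confidence, class) via filtering + greedy NMS."""
    # 1. Drop low-objectness candidates.
    pred = pred[pred[:, 4] > conf_thres]
    if pred.size == 0:
        return np.zeros((0, 6))

    # 2. Combined confidence = objectness * best class score.
    cls = pred[:, 5:].argmax(1)
    conf = pred[:, 4] * pred[:, 5:].max(1)

    # 3. Convert (cx, cy, w, h) to corner coordinates (x1, y1, x2, y2).
    boxes = np.empty((len(pred), 4))
    boxes[:, 0] = pred[:, 0] - pred[:, 2] / 2
    boxes[:, 1] = pred[:, 1] - pred[:, 3] / 2
    boxes[:, 2] = pred[:, 0] + pred[:, 2] / 2
    boxes[:, 3] = pred[:, 1] + pred[:, 3] / 2

    # 4. Greedy NMS: keep the highest-confidence box, drop overlaps.
    order = conf.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou <= iou_thres]

    return np.concatenate([boxes[keep], conf[keep, None], cls[keep, None]], axis=1)
```

(If your pipeline consumes the raw ONNX output, applying something like the above should bring it much closer to what detect.py reports.)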
I've tried to make sure that the versions of all components used by YOLO are the same ones used by the additional pipeline, but that hasn't helped at all.
Any suggestions?