We converted the model to ONNX by copying /DeepStream-Yolo/export_yoloV5.py into the YOLOv5-6.2 repository, and then generated a TensorRT engine through DeepStream. The log shows the engine being built from the model files.
logs:
gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
However, when we ran the original model and the ONNX model separately, we found that one target's score dropped significantly in the ONNX model, while the other target's score remained normal. In a single image, both targets score 0.97 with the original model, but with the ONNX model one target scores around 0.07 while the other stays at 0.97. How can we solve this score loss in the converted model?
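One way to narrow down where the divergence happens is to run the two models on the same preprocessed image, match their detections by IoU, and compare scores pairwise; a large per-box gap with overlapping boxes points at the post-processing (decode/sigmoid) path rather than the backbone. Below is a minimal, self-contained sketch of such a comparison helper; the `(x1, y1, x2, y2, score)` box format and the 0.5 IoU threshold are assumptions, not something prescribed by DeepStream or YOLOv5:

```python
import numpy as np

def iou(a, b):
    # a, b: boxes as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def compare_scores(orig_dets, onnx_dets, iou_thr=0.5):
    """Match each original detection to the best-overlapping ONNX
    detection and report (orig_score, onnx_score, gap).
    Each detection is a tuple (x1, y1, x2, y2, score)."""
    report = []
    for od in orig_dets:
        best, best_iou = None, 0.0
        for xd in onnx_dets:
            i = iou(od[:4], xd[:4])
            if i > best_iou:
                best, best_iou = xd, i
        if best is not None and best_iou >= iou_thr:
            report.append((od[4], best[4], od[4] - best[4]))
        else:
            # No overlapping box: the target was lost entirely in ONNX
            report.append((od[4], None, None))
    return report
```

With numbers like those reported above, a result such as `(0.97, 0.07, 0.90)` for one matched pair (boxes overlapping, score collapsed) would suggest the geometry survives conversion and only the confidence branch is affected.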
hechibing changed the title from "DeepStream converts and runs YOLOV5 model, but the scores of detected targets is significantly lost, especially for low-scores targets" to "DeepStream converts and runs YOLOV5 model, but the scores of detected targets is significantly lost" on Sep 23, 2023