Changing conf parameter Inference API code has no effect on results #906
Comments
👋 Hello @curtinmjc, thank you for bringing this issue to our attention regarding the Ultralytics HUB Inference API 🚀! We're here to help ensure everything runs smoothly. Our HUB documentation offers comprehensive guides and insights that you may find useful.
Based on your description, it sounds like you've encountered a potential 🐛 bug. To assist our engineering team further, could you please ensure that your bug report includes a detailed minimum reproducible example? This helps us replicate the issue on our side. For debugging purposes, it would also be beneficial to confirm your environment details.
Rest assured, an Ultralytics engineer will review your issue soon to assist you further. Thank you for your patience and for helping us improve the Ultralytics HUB! 😊
@curtinmjc hello! Thank you for bringing this to our attention. It seems like you're experiencing an issue where changing the `conf` parameter has no effect on the results. First, please ensure that you are using the latest version of the Ultralytics HUB and related packages, as updates may have addressed this issue. If the problem persists, it might be related to how the confidence threshold is applied in the classification model. Unlike detection models, classification models typically output a single prediction with the highest confidence, which might not be filtered by the confidence threshold in the same way. To further investigate, you might want to try using a detection model to see if the `conf` parameter behaves as expected there. If you continue to experience this issue, please feel free to provide more details or any additional observations. Your feedback is invaluable in helping us improve our tools. 😊 Thank you for your patience and for being a part of the YOLO community!
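To illustrate the distinction above, here is a minimal sketch (my own illustration, not HUB source code) of why a confidence threshold naturally filters detection output but leaves a classifier's top-1 prediction untouched:

```python
def filter_detections(boxes, conf):
    """Detection-style postprocessing: keep only boxes whose score
    clears the threshold, so raising `conf` changes the output."""
    return [b for b in boxes if b["confidence"] >= conf]


def top1_classification(class_scores):
    """Classification-style postprocessing: always return the single
    highest-scoring class, so a threshold has nothing to filter."""
    return max(class_scores, key=class_scores.get)


boxes = [{"name": "dog", "confidence": 0.61}, {"name": "cat", "confidence": 0.30}]
print(filter_detections(boxes, 0.25))  # both boxes survive
print(filter_detections(boxes, 0.90))  # no boxes survive

scores = {"dog": 0.61369, "cat": 0.30}
print(top1_classification(scores))  # "dog", regardless of any threshold
```

This matches the behavior reported in the issue: the classification response is identical at `conf=0.25` and `conf=0.90`.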
@curtinmjc Thank you for raising this issue. I have confirmed that this is a valid issue with the classification model in the Shared Inference API. I have reported this to the development team, and they are currently working on a fix. I will update you as soon as it is resolved. Thank you for your patience and for helping us improve the Ultralytics HUB!
Any progress on this issue? Do you have an ETA for the fix? Thanks.
Thank you for following up! The issue with the `conf` parameter in the Shared Inference API for classification models is still being worked on by the development team; we don't yet have an ETA for the fix. In the meantime, you can continue using the existing setup, with the understanding that the `conf` parameter currently has no effect on classification results. To stay updated, please make sure to watch this thread or the Ultralytics HUB repository. We'll notify you as soon as it's resolved. Thank you for your patience and continued support! 😊
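Until the server-side fix lands, one possible interim workaround (my suggestion, not an official fix) is to apply the threshold client-side to the predictions returned by the API. The prediction layout here, a list of `{"name": ..., "confidence": ...}` dicts, is an assumption; adapt it to the actual JSON your response contains:

```python
def apply_conf(predictions, conf):
    """Client-side confidence filter for API predictions.

    `predictions` is assumed to be a list of dicts, each with a
    "confidence" key; entries below the threshold are dropped.
    """
    return [p for p in predictions if p["confidence"] >= conf]


preds = [{"name": "dog", "confidence": 0.61369}, {"name": "cat", "confidence": 0.12}]
print(apply_conf(preds, 0.25))  # keeps only "dog"
print(apply_conf(preds, 0.90))  # empty list
```

Note that if the classifier only ever returns its top-1 prediction, filtering it out at a high threshold yields an empty result rather than a lower-confidence alternative.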
Search before asking
HUB Component
Inference
Bug
I am opening this GitHub Issue (Bug) at the suggestion of pderrenger in Issue (Question) #893, where this problem was first raised about four days ago. The example concerns classification model inference results that are identical for both "conf": 0.25 and "conf": 0.90. The model used in this test is YOLO11n Classify (cm_v11n_100epoch-640imgsz_20241027LatlPhoto), running on the Ultralytics HUB.
I am attaching two screenshots: Python code w/ conf=0.25.png and Python code w/ conf=0.90.png. Both show the same results being returned, with the response in both cases having confidence = 0.61369.
Environment
Minimal Reproducible Example
The example is shown in the screenshots above.
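Since the screenshots themselves are not reproduced here, the following is a sketch of the kind of request they show, based on my reading of the Ultralytics HUB inference API docs. The endpoint URL, payload field names, model URL, API key, and image path should all be treated as assumptions to verify against the current documentation:

```python
import requests  # third-party HTTP library


def build_payload(conf):
    """Build the form data for the inference request (field names assumed
    from the HUB inference API docs; MODEL_ID is a placeholder)."""
    return {
        "model": "https://hub.ultralytics.com/models/MODEL_ID",  # placeholder
        "imgsz": 640,
        "conf": conf,
        "iou": 0.45,
    }


def classify(image_path, conf, api_key):
    """POST an image to the (assumed) shared inference endpoint."""
    with open(image_path, "rb") as f:
        r = requests.post(
            "https://predict.ultralytics.com",  # assumed endpoint
            headers={"x-api-key": api_key},
            data=build_payload(conf),
            files={"file": f},
        )
    r.raise_for_status()
    return r.json()
```

Running `classify("test.jpg", 0.25, API_KEY)` and `classify("test.jpg", 0.90, API_KEY)` reproduces the report: both calls return the same prediction with confidence 0.61369, even though only the `conf` value differs.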
Additional
No response