Refactor detection FMetric #4130
Conversation
@eugene123tw @kprokofi could you discuss the changes in detail? From my side, I'm afraid this could affect the final estimated threshold value, which is important when the train/val sets are small. Perhaps we should keep both the slow (dynamic NMS threshold) and fast modes, but that depends on experiments. If the final confidence threshold estimation is not skewed, then it's fine to keep only the fast version.
@sovrasov @kprokofi refactoring the F1-score calculation is relatively safe. The primary change replaces the IoU computation loop with matrix operations, making the evaluation more efficient and faster. The final F1-score results, including confidence thresholds, remain consistent. I validated the before/after comparison across 9 datasets and there are no significant differences in accuracy. The results are summarized in the table below:
Additionally, I verified functionality in tests/unit/core/metrics/test_fmeasure.py, and all checks passed without issues.
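To illustrate the kind of change being discussed, here is a minimal sketch of replacing a per-pair IoU loop with a single broadcasted matrix computation. The function name and box layout (`[x1, y1, x2, y2]`) are assumptions for illustration, not the actual OTX implementation:

```python
import numpy as np

def pairwise_iou(boxes1: np.ndarray, boxes2: np.ndarray) -> np.ndarray:
    """Compute the full IoU matrix between two sets of [x1, y1, x2, y2] boxes.

    boxes1: (N, 4), boxes2: (M, 4) -> (N, M) IoU matrix, computed with
    broadcasting instead of a nested Python loop over all N*M pairs.
    """
    # Intersection corners via broadcasting: (N, 1, 2) vs (1, M, 2) -> (N, M, 2)
    lt = np.maximum(boxes1[:, None, :2], boxes2[None, :, :2])
    rb = np.minimum(boxes1[:, None, 2:], boxes2[None, :, 2:])
    wh = np.clip(rb - lt, 0, None)          # clamp negative widths/heights to 0
    inter = wh[..., 0] * wh[..., 1]         # (N, M) intersection areas

    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    union = area1[:, None] + area2[None, :] - inter
    return inter / np.clip(union, 1e-9, None)  # guard against division by zero
```

The vectorized form produces the same values as the loop; the speedup comes purely from moving the pairwise work into NumPy.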
@eugene123tw could you have a look at the iseg test failures?
@sovrasov I forgot to filter scores in the TV MaskRCNN post-processing. The new change is in this file:
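For context, the kind of fix described here can be sketched as a score-threshold filter applied to predictions before they reach the metric. The function name, argument layout, and threshold value below are hypothetical, not the actual OTX post-processing API:

```python
import numpy as np

def filter_low_scores(boxes, scores, labels, score_threshold=0.05):
    """Drop predictions whose confidence falls below score_threshold.

    boxes: (N, 4), scores: (N,), labels: (N,). A boolean mask keeps all
    three arrays aligned so the surviving predictions stay consistent.
    """
    keep = scores >= score_threshold
    return boxes[keep], scores[keep], labels[keep]
```

Without such a filter, near-zero-confidence predictions inflate the false-positive count and skew the estimated confidence threshold.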
Summary
Refactor FMetric by removing the unused dynamic NMS threshold and optimizing the IoU/metric computations.
Evaluation Time Comparison
F1 Score Comparison
Overall Elapsed Time
Before PR elapsed time: 0:38:00.087019
After PR elapsed time: 0:13:25.230139
Note:
How to test
Checklist
License
Feel free to contact the maintainers if that's a concern.