
Average Precision in evaluation script? #16

Open
sravya8 opened this issue Oct 6, 2017 · 1 comment

Comments

sravya8 commented Oct 6, 2017

It seems like COCO-Text (ICDAR17) is using VOC-style AP as an evaluation metric, so I'm curious why it is not supported in the evaluation API?


sravya8 commented Oct 6, 2017

I see there is an offline evaluation script provided on the competition website under the "My methods" page. Here is the snippet for the AP calculation; the comments are mine:

for n in range(len(confList)):  # Num predictions
    match = matchList[n]
    if match:
        correct += 1
        AP += float(correct) / (n + 1)  # rel(n) missing?

if numGtCare > 0:
    AP /= numGtCare

Is there a rel(n) term missing? Also, from the competition page it seems the evaluation is based on VOC-style AP. In that case, shouldn't the script use interpolated precision over confidence intervals?
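
For reference, here is a minimal sketch (my own, not taken from the competition script) of what a PASCAL VOC 2007-style 11-point interpolated AP would look like. The names confidences, matches, and num_gt are my own stand-ins for the script's confList, matchList, and numGtCare:

import numpy as np

def voc_ap_11point(confidences, matches, num_gt):
    # Sort detections by descending confidence score.
    order = np.argsort(-np.asarray(confidences, dtype=float))
    matches = np.asarray(matches, dtype=float)[order]

    # Cumulative true positives give precision and recall at every rank.
    tp = np.cumsum(matches)
    precision = tp / np.arange(1, len(matches) + 1)
    recall = tp / max(num_gt, 1)

    # 11-point interpolation: at each recall level r in {0.0, 0.1, ..., 1.0},
    # take the maximum precision achieved at any recall >= r, then average.
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recall >= r
        ap += (precision[mask].max() if mask.any() else 0.0) / 11.0
    return ap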
