I see there is an offline evaluation script provided on the competition website, on the "My methods" page. Here is the snippet for the AP calculation; the comments are mine:
```python
for n in range(len(confList)):          # Num predictions
    match = matchList[n]
    if match:
        correct += 1
        AP += float(correct) / (n + 1)  # rel(n) missing?
if numGtCare > 0:
    AP /= numGtCare
```
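For comparison, here is a direct transcription of the textbook definition, AP = (1 / numGtCare) * Σ P(n) · rel(n), with the rel(n) term written out explicitly. This is just a sketch; `average_precision`, `match_list`, and `num_gt_care` are my own names mirroring the script's variables:

```python
def average_precision(match_list, num_gt_care):
    """Textbook AP: mean over ranks of P(n) * rel(n), where rel(n) is 1
    if prediction n matched a GT box and 0 otherwise. Assumes match_list
    is already ordered by descending confidence."""
    correct = 0
    ap = 0.0
    for n, match in enumerate(match_list):
        rel = 1 if match else 0                  # rel(n) made explicit
        if rel:
            correct += 1
        ap += (float(correct) / (n + 1)) * rel   # P(n) * rel(n)
    return ap / num_gt_care if num_gt_care > 0 else 0.0
```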
Is there a rel(n) term missing? Also, the competition page suggests the evaluation is based on VOC-style AP. In that case, shouldn't the script use interpolated precision over confidence intervals?
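For reference, this is roughly what an 11-point VOC2007-style interpolated AP would look like. A sketch only, under the assumption that `conf_list`, `match_list`, and `num_gt_care` correspond to the script's variables:

```python
import numpy as np

def voc_ap_11point(conf_list, match_list, num_gt_care):
    """11-point interpolated AP (VOC2007 style): average, over recall
    thresholds 0.0, 0.1, ..., 1.0, of the maximum precision achieved
    at any recall >= that threshold."""
    if num_gt_care <= 0:
        return 0.0
    order = np.argsort(-np.asarray(conf_list))   # sort by descending confidence
    matches = np.asarray(match_list, dtype=float)[order]
    tp = np.cumsum(matches)                      # cumulative true positives
    precision = tp / np.arange(1, len(matches) + 1)
    recall = tp / num_gt_care
    ap = 0.0
    for t in np.linspace(0.0, 1.0, 11):
        above = precision[recall >= t]
        ap += (above.max() if above.size else 0.0) / 11.0
    return ap
```

(Later VOC releases switched to all-point interpolation, i.e. the area under the monotonically decreasing precision-recall curve, but the 11-point variant is what "VOC style" usually refers to.)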
It seems like COCO-Text ICDAR17 uses VOC-style AP as its evaluation metric, so I'm curious why it is not supported in the evaluation API?