English models' Accuracy Evaluation values #13185
ojo4f3 started this conversation in Language Support · Replies: 1 comment
-
English has enough training data that larger models and vectors don't make a huge difference. All the raw evaluation numbers are here (the website tables are generated dynamically from this data):
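If you want to spot-check the published tables yourself, the scores are also shipped inside each installed model's metadata (in spaCy v3, under `nlp.meta["performance"]`). Below is a minimal sketch; `diff_performance` is a hypothetical helper, and the two dicts use the only values quoted in this thread (NER recall 0.85 vs. 0.86) as stand-in metadata rather than real model output.

```python
# Sketch: find which evaluation metrics differ between two models.
# With real models you would load them first, e.g.:
#   import spacy
#   md_perf = spacy.load("en_core_web_md").meta["performance"]
#   lg_perf = spacy.load("en_core_web_lg").meta["performance"]

def diff_performance(a: dict, b: dict) -> dict:
    """Return {metric: (score_a, score_b)} for metrics whose scores differ."""
    return {k: (a[k], b[k]) for k in a.keys() & b.keys() if a[k] != b[k]}

# Stand-in metadata built from the values mentioned in the discussion.
md_perf = {"ents_p": 0.85, "ents_r": 0.85, "ents_f": 0.85}
lg_perf = {"ents_p": 0.85, "ents_r": 0.86, "ents_f": 0.85}

print(diff_performance(md_perf, lg_perf))  # only ents_r (NER recall) differs
```

This makes it easy to confirm that two models' tables really are identical except for a single metric, instead of eyeballing the website side by side.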
-
Hello!
I have a question regarding the English model stats displayed on the spaCy website: https://spacy.io/models/en. The Accuracy Evaluation tables for the en_core_web_sm, en_core_web_md, and en_core_web_lg models are exactly the same (except that the en_core_web_lg model's Named Entities (recall) value is 0.86 instead of 0.85 like the others). The en_core_web_trf model's values are unique. I understand these values may be correct, but I wanted to make sure, since I'd assume that the larger models would have better results.
Thanks,
Colton