With these models you should be able to use "longer" sentences, because the tokenizer produces fewer subtokens per word, at least in theory (compared to the "normal" BERTurk models, which have a 32k vocab).
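You can check the difference in subtoken fertility yourself. A minimal sketch, assuming the `dbmdz/bert-base-turkish-cased` (32k vocab) and `dbmdz/bert-base-turkish-128k-cased` checkpoints (the example sentence is arbitrary):

```python
from transformers import AutoTokenizer

text = "Kütüphaneden ödünç aldığım kitapları geri götürmeliyim."

# Tokenize the same sentence with both vocab sizes and compare
# how many subtokens each one needs.
for name in ("dbmdz/bert-base-turkish-cased",
             "dbmdz/bert-base-turkish-128k-cased"):
    tok = AutoTokenizer.from_pretrained(name)
    tokens = tok.tokenize(text)
    print(f"{name}: {len(tokens)} subtokens -> {tokens}")
```

The larger vocab usually keeps more whole words intact, so the same character budget of text fits into fewer of the model's 512 positions.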
How can the BERT Turkish sentiment cased model be used to calculate sentiment scores for texts longer than the 512-token sequence limit?
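The model itself cannot attend beyond 512 tokens, so a common workaround is to score the text in overlapping 512-token windows and pool the per-window probabilities. A minimal sketch with the HuggingFace `transformers` library, assuming the `savasy/bert-base-turkish-sentiment-cased` checkpoint (any BERTurk-based sequence-classification model works the same way); `long_text_sentiment` is a hypothetical helper name:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "savasy/bert-base-turkish-sentiment-cased"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def long_text_sentiment(text, max_len=512, stride=128):
    # Let the tokenizer cut the sequence into overlapping windows of at
    # most `max_len` tokens; `stride` tokens are shared between windows.
    enc = tokenizer(
        text,
        max_length=max_len,
        stride=stride,
        truncation=True,
        return_overflowing_tokens=True,
        padding=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(
            input_ids=enc["input_ids"],
            attention_mask=enc["attention_mask"],
        ).logits
    # Average the per-window class probabilities into one score per label.
    probs = torch.softmax(logits, dim=-1).mean(dim=0)
    return {model.config.id2label[i]: p.item() for i, p in enumerate(probs)}

print(long_text_sentiment("Çok uzun bir örnek metin ... " * 200))
```

Mean pooling is only one choice; depending on the use case, taking the max over windows or weighting windows by length may fit better. For documents where long-range context really matters, a long-input architecture (e.g. Longformer) would be the more principled route.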