Hello,

Thank you for this great package!

I would like to know on which datasets, and how, the two models used by OmniEvent.infer were fine-tuned. That is, the two models whose links are accessible in the utils module.

In particular, I noticed there is a "schema" option in OmniEvent.infer. I took it as suggesting that the models were fine-tuned on all of the available schemas. Yet, when digging a bit further, I noticed that none of these schemas are passed as special_tokens to the tokenizer. So I'm wondering how the model knows that we are referring to a specific task (that is, the fine-tuning on a specific dataset) when each text is prepended with f"<txt_schema>". To be concrete: given "<maven>The king married the queen", how does the model understand that I want it to focus on what it learned while being fine-tuned on the MAVEN dataset?

I ran a test with only the EDProcessor class using the schema "maven", and indeed it treated the prefix like any other token.

Thank you
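To illustrate the behavior I observed, here is a minimal toy sketch (not OmniEvent's or HuggingFace's actual tokenizer; the class and its matching rule are simplified assumptions) of how a subword tokenizer handles such a prefix with and without it being registered as a special token:

```python
# Toy illustration (NOT the real tokenizer): a character-level
# "subword" tokenizer that only keeps a string atomic if it was
# registered as a special token beforehand.
class ToyTokenizer:
    def __init__(self):
        self.special_tokens = []

    def add_special_tokens(self, tokens):
        # Registered strings are matched greedily and kept whole.
        self.special_tokens.extend(tokens)

    def tokenize(self, text):
        pieces, i = [], 0
        while i < len(text):
            for sp in self.special_tokens:
                if text.startswith(sp, i):
                    pieces.append(sp)   # special token stays atomic
                    i += len(sp)
                    break
            else:
                pieces.append(text[i])  # otherwise fall back to pieces
                i += 1
        return pieces

tok = ToyTokenizer()
print(tok.tokenize("<maven>Hi"))   # prefix split into ordinary pieces
tok.add_special_tokens(["<maven>"])
print(tok.tokenize("<maven>Hi"))   # prefix kept as one atomic token
```

Without registration, "<maven>" is broken into ordinary pieces just like any other text, which matches what I saw with EDProcessor.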
Hello, that's a good question. The models we release are trained on multiple EE datasets. When training on different datasets, we add a prefix to represent the schema of the data. For example, we use "<maven>" to represent the schema of the MAVEN dataset. However, due to limitations in data volume and model capacity, the released models sometimes struggle to follow human instructions (i.e., the schema prefix).
We are currently researching how to align the model better for IE tasks to make it more adept at following human instructions.
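As a rough sketch of the setup described above (the exact prefix format and the helper name are assumptions for illustration, not OmniEvent's actual code), the schema-prefixed inputs would be built like this:

```python
# Hypothetical illustration of schema-prefixed inputs; the exact
# prefix strings are assumptions, not OmniEvent's real format.
def with_schema_prefix(text: str, schema: str) -> str:
    """Prepend a dataset-specific schema marker to the input text."""
    return f"<{schema}>{text}"

print(with_schema_prefix("The king married the queen", "maven"))
# -> <maven>The king married the queen
```

During multi-dataset training, each example would carry the prefix of the dataset it came from, so the model can (in principle) condition its output on the requested schema.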
I'm still unclear, though, on these prefixes. It seems you did not add them as special tokens in the tokenizer. Did you consider that treating them like any other word was not a problem, or am I missing something?
Thanks for the question. We trained two versions of the model: one with the prefixes added as special tokens and one without. There is no significant difference between the results of the two. Previous work has also revealed a similar phenomenon (https://aclanthology.org/2022.aacl-short.21.pdf).