
Can you give me some suggestions about fine-tuning SQuAD models in your project? #54

Open
svjack opened this issue Feb 1, 2021 · 2 comments

Comments


svjack commented Feb 1, 2021

Following our earlier discussion, I have tried to port the function to another language.
I am now thinking about fine-tuning the SQuAD model you use to extract condition strings
from the input question. As the code shows, you use the colquery (constructed from a keyword
with "which" or "number of" as the question word) as the question, and the actual user question as the document to extract from.
To get better inference on this, one should fine-tune the model on a matching dataset.
So can you give me some suggestions about labeling my own dataset for fine-tuning?
I worry that always using your colquery construction to build my SQuAD dataset may make the examples too uniform.
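For reference, here is a minimal sketch of what one labeled record could look like under this colquery-as-question scheme, laid out in the SQuAD v1.1 JSON format. The colquery text, user question, and answer span below are made-up examples, not taken from the project:

```python
import json

# Hypothetical example values; the real colquery and user question would
# come from your own data.
context = "how many employees earn more than 50000 in the sales department"
answer_text = "sales"

example = {
    "data": [{
        "title": "condition_extraction",
        "paragraphs": [{
            # The actual user question is used as the context to extract from.
            "context": context,
            "qas": [{
                "id": "example-0",
                # The colquery (keyword plus "which" / "number of") is used as the question.
                "question": "which department",
                "answers": [{
                    "text": answer_text,
                    # Character offset of the condition string inside the context.
                    "answer_start": context.index(answer_text),
                }],
            }],
        }],
    }],
}

print(json.dumps(example, indent=2))
```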


svjack commented Feb 2, 2021

I also think that using named entity recognition and relation extraction could extract the conditions more accurately.
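As a rough illustration (not part of the project), a generic Hugging Face NER pipeline could surface candidate condition values from the user question. This assumes a recent transformers release and uses the default English NER model, so a model for the target language would be needed in practice:

```python
from transformers import pipeline

# Default English NER model; substitute one for your target language.
ner = pipeline("ner", aggregation_strategy="simple")

question = "how many employees work in the New York office"
for entity in ner(question):
    # Each entity is a candidate condition value; mapping it to the right
    # column would still need a separate step, e.g. relation extraction
    # or column-name matching.
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```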

@abhijithneilabraham
Owner

Yes, what you said is correct. We are using a BERT model from Hugging Face for the QA task. Since you are doing Chinese QA, you will actually have to use a Chinese QA model; check whether one is already available on Hugging Face, or find a dataset and train a QA model on it.
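For example, a minimal sketch of swapping in a Chinese extractive-QA checkpoint through the transformers pipeline API; the model id below is a placeholder, not a specific recommendation:

```python
from transformers import pipeline

# Placeholder model id: substitute a Chinese extractive-QA checkpoint found
# on the Hugging Face Hub, or one you fine-tune yourself.
MODEL_ID = "your-chinese-qa-model"

qa = pipeline("question-answering", model=MODEL_ID, tokenizer=MODEL_ID)

# Same pattern as the English setup: colquery as the question, the real
# user question as the context.
result = qa(question="哪个部门", context="销售部门有多少员工的工资超过50000")
print(result["answer"], result["score"])
```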
