
How to fine-tune the pretrained model #18

Open · Hyfred opened this issue Aug 23, 2022 · 3 comments

Hyfred commented Aug 23, 2022

I was wondering how to fine-tune the released model on another dataset.

hagenw (Member) commented Sep 8, 2022

For fine-tuning, you can download the torch version of the model from https://huggingface.co/audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim.

We mention in the README that the torch model is published there:

[screenshot of the README paragraph linking to the Hugging Face model]

but maybe we should highlight this more?
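In case it helps, here is a minimal sketch of how the released torch model could be loaded as a backbone for fine-tuning with `transformers`. The mean pooling and the 3-dimensional head below are illustrative assumptions, not the exact architecture from the paper (the model card defines a custom `EmotionModel` class for that):

```python
# A minimal sketch, assuming the goal is to reuse the released backbone
# with a fresh task head; the pooling and the 3-dimensional head are
# illustrative assumptions, not the authors' exact training setup.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model, Wav2Vec2Processor

model_name = "audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim"
processor = Wav2Vec2Processor.from_pretrained(model_name)
backbone = Wav2Vec2Model.from_pretrained(model_name)  # loads backbone weights only

# Hypothetical head for a 3-dimensional target (e.g. arousal, dominance,
# valence); replace it with whatever your new dataset requires.
head = nn.Linear(backbone.config.hidden_size, 3)

# Smoke test: forward pass over one second of silence at 16 kHz.
signal = torch.zeros(16000)
inputs = processor(signal.numpy(), sampling_rate=16000, return_tensors="pt")
hidden = backbone(inputs.input_values).last_hidden_state  # (1, T, H)
pooled = hidden.mean(dim=1)                               # average pooling over time
preds = head(pooled)                                      # (1, 3)
```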

Vedaad-Shakib commented

Some details, such as the Adam optimizer hyperparameters, are not given in the paper. Should we assume you used the default Wav2Vec2 hyperparameters where the paper does not specify them?

frankenjoe (Collaborator) commented
Yes, where not mentioned otherwise we kept the default parameters of Hugging Face's TrainingArguments.
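For reference, those optimizer defaults can be inspected directly; a quick sketch (the commented values match recent `transformers` releases, but may differ across versions):

```python
# Inspect the default Adam-related settings of TrainingArguments; these
# apply whenever a value is not passed explicitly (exact defaults may
# vary with the installed transformers version).
from transformers import TrainingArguments

args = TrainingArguments(output_dir="tmp")
print(args.learning_rate)  # 5e-05
print(args.adam_beta1)     # 0.9
print(args.adam_beta2)     # 0.999
print(args.adam_epsilon)   # 1e-08
print(args.weight_decay)   # 0.0
```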
