
Initialization has great impact on detection performance? #19

Open
lixucuhk opened this issue Jul 10, 2020 · 1 comment


@lixucuhk

Dear author,

I really appreciate your code implementing the ASSERT model. It is excellent work!
In practice, though, I ran into some questions. When I ran the scripts multiple times with exactly the same settings, the results varied widely. For example, I ran the SE-ResNet34 model 10 times on the replay detection task with your default settings: the best run achieved an EER of 0.67% on dev and 1.11% on eval, while the worst had an EER of 1.50% on dev and 2.02% on eval. I am curious about the reason behind this. Does the initialization of the model parameters really have such a great impact, or is something else responsible? How can I avoid this and make the training more stable? Could you please give some suggestions?
Thanks a lot!
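For reference, run-to-run variance like this usually comes from unseeded RNGs (weight initialization, dropout, data shuffling) and nondeterministic cuDNN kernels. Below is a minimal sketch of pinning these down, assuming the repo's PyTorch training scripts; the `set_seed` helper is hypothetical and not part of ASSERT:

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    """Pin down the RNGs that typically drive run-to-run variance:
    parameter init, dropout, data shuffling, and cuDNN kernels."""
    random.seed(seed)                 # Python stdlib RNG
    np.random.seed(seed)              # numpy RNG (e.g. feature handling)
    torch.manual_seed(seed)           # CPU RNG; controls default weight init
    torch.cuda.manual_seed_all(seed)  # RNGs on all GPUs
    # Force cuDNN to pick deterministic convolution algorithms
    # (slower, but removes one source of nondeterminism).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(42)  # call once, before building the model and the DataLoaders
```

Even with seeds fixed, a few CUDA operations remain nondeterministic, so reporting the mean and spread of the EER over several runs is still good practice.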

@wangtao2668129173

Hi, I tested it with my own framework, and my best result was an EER of 1.42% on the dev set and 2.26% on the eval set. Could you share your training techniques? And could you provide me with your trained model for testing?
