did anyone reproduce the transformer network with frozen GPT-2? #70
Comments
I want to know how the evaluation results are produced. Can I obtain them without running train.py? |
I use https://github.com/salaniz/pycocoevalcap to evaluate the results: I rewrite captions_val2014_fakecap_results.json in the "example" folder with my generated captions and run "python coco_eval_example.py". |
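For context, the results file that pycocoevalcap's example script loads is a JSON array of objects with an "image_id" and a "caption" field. A minimal sketch of writing generated captions in that format (the image IDs and caption strings below are placeholders, not real model output):

```python
import json

# Hypothetical generated captions keyed by MSCOCO image id (placeholder data).
generated = {
    391895: "a man riding a motorcycle down a dirt road",
    522418: "a woman cutting a cake on a table",
}

# pycocoevalcap's loadRes expects a list of {"image_id": int, "caption": str}.
results = [{"image_id": img_id, "caption": cap}
           for img_id, cap in generated.items()]

with open("captions_val2014_fakecap_results.json", "w") as f:
    json.dump(results, f)
```

After overwriting the example file this way, coco_eval_example.py scores the captions against the ground-truth val2014 annotations.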
I want to know whether you have reproduced the transformer results reported in the paper. |
No, I have not reproduced that result. |
Hello, I have received your email. Please be advised.
I also trained the transformer-only model and evaluated it as you describe; my result is similar to yours and falls short of the paper's. Have you solved this? |
I run the command:
python train.py --only_prefix --data ./data/coco/oscar_split_ViT-B_32_train.pkl --out_dir ./coco_train/ --mapping_type transformer --num_layers 8 --prefix_length 40 --prefix_length_clip 40
The model is trained on the MSCOCO dataset (train+val). On the test split I get BLEU-4 20.0 and CIDEr 66.3, with the best result at the third epoch. This is far below what the paper reports: BLEU-4 33.53 and CIDEr 113.08.
I am confused by this gap. Did anyone reproduce the reported result, or did I miss something?