Training Plot #18
Comments
Hi! I usually don't focus on the individual losses; the dense eval on MegaDepth-1500 correlates well, and at the end of training it should be around 87% @ 1 pixel and 97% @ 5 pixels, if I remember correctly. We trained with 4 GPUs (total batch size of 32).
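Those pixel-threshold numbers are a PCK-style measure; a minimal sketch of how such an eval is computed (illustrative only, not the exact eval code, and the function/argument names are made up):

```python
import torch

# PCK-style dense matching eval at pixel thresholds (sketch).
def pck(pred_pts: torch.Tensor, gt_pts: torch.Tensor, thresholds=(1.0, 5.0)):
    """pred_pts, gt_pts: (N, 2) corresponding points in pixel coordinates."""
    err = (pred_pts - gt_pts).norm(dim=-1)  # per-correspondence pixel error
    return {f"pck@{t}px": (err < t).float().mean().item() for t in thresholds}
```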
By the way, what are your AUC@10 and AUC@20?
In the paper I found this result on MegaDepth-1500: RoMa 62.6 / 76.7 / 86.3 (AUC @ 5° / 10° / 20°).
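For reference, pose AUC at these thresholds is usually computed by integrating the recall curve of the per-pair pose error up to each threshold; a sketch of the standard formulation (this repo's exact eval code may differ):

```python
import numpy as np

def pose_auc(errors, thresholds=(5.0, 10.0, 20.0)):
    """errors: per-pair pose error in degrees (max of rotation/translation error)."""
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    errors = np.concatenate(([0.0], errors))
    recall = np.concatenate(([0.0], recall))
    aucs = {}
    for t in thresholds:
        idx = np.searchsorted(errors, t)
        e = np.concatenate((errors[:idx], [t]))          # clip curve at threshold
        r = np.concatenate((recall[:idx], [recall[idx - 1]]))
        aucs[f"auc@{int(t)}"] = float(np.trapz(r, x=e) / t)  # normalized area
    return aucs
```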
What happens if you run the evaluation with our pretrained weights?
Actually, I see that some parts of the code have been updated while the eval is old; I'll go through the code and update it.
These are the results of the pretrained weights:
And yes, this training takes quite a long time. As we report in the paper, it takes about 4 days on 4 A100s. This is currently one of the downsides of both DKM and RoMa.
Thanks!
Thanks for the plot!
So the backbone was pretrained on your dataset and then frozen, like in RoMa? We have an experiment regarding the performance of different frozen backbone features; perhaps you could try something similar, to see if your pretraining produces better features than DINOv2 for matching. It's hard to say much more without knowing more details of your experiment.
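A minimal sketch of such a frozen-feature probe, assuming the public DINOv2 torch.hub entry point (swap in your own pretrained backbone to compare):

```python
import torch

# Sketch: features come from a frozen backbone, and only the matcher head
# on top would be trained. "dinov2_vitl14" is the public DINOv2 hub entry.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # keep the backbone frozen

@torch.no_grad()
def coarse_features(images: torch.Tensor) -> torch.Tensor:
    """images: (B, 3, H, W) with H, W divisible by 14; returns patch tokens."""
    return backbone.forward_features(images)["x_norm_patchtokens"]  # (B, N, C)
```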
Hello!
I'm trying to train RoMa myself; I wonder if you could upload your training plot.
In addition, the training process logs several losses (delta_regression_loss_1, delta_certainty_loss_16, delta_certainty_loss_4, gm_cls_loss_16, ...), and I am not sure which output I should focus on; my rough understanding of how these terms combine is sketched below.
Also, for how long was the model trained? After 250,000 steps I was able to achieve AUC 0.58 @ 5° on MegaDepth-1500.
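```python
import torch

# How I understand the per-scale terms combine into one objective; the keys
# come from the training logs, but the weights below are illustrative
# placeholders, not the repository's actual values.
def total_loss(losses: dict[str, torch.Tensor]) -> torch.Tensor:
    weights = {
        "gm_cls_loss_16": 1.0,           # coarse global-matching classification
        "delta_certainty_loss_16": 1.0,  # certainty supervision at stride 16
        "delta_certainty_loss_4": 1.0,   # certainty supervision at stride 4
        "delta_regression_loss_1": 1.0,  # fine match regression at stride 1
    }
    return sum(w * losses[name] for name, w in weights.items())
```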