
Fail to get the same results #6

Open
neineimeow opened this issue Dec 26, 2020 · 3 comments

Comments

@neineimeow

Hi! Thanks for your wonderful work, but I fail to reproduce the results in your paper. Could you post the GA results with the VGG16 backbone on Cityscapes → Foggy Cityscapes? Mine are:
GA:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.174
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.331
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.157
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.019
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.156
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.359
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.146
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.263
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.286
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.052
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.262
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.560

GA_CA:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.183
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.340
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.164
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.023
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.163
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.370
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.149
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.280
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.302
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.050
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.279
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.582
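When comparing dumps like the two above, it can help to pull the headline AP@0.50 number out of the pycocotools-style summary text programmatically. A minimal sketch (the helper name `extract_ap50` is mine, not from this repository):

```python
def extract_ap50(summary: str) -> float:
    """Pull AP@[IoU=0.50] out of a pycocotools-style summary dump.

    The trailing space in "IoU=0.50 " keeps the IoU=0.50:0.95 lines
    from matching.
    """
    for line in summary.splitlines():
        if "(AP)" in line and "IoU=0.50 " in line:
            # The metric value follows the last "=" on the line.
            return float(line.rsplit("=", 1)[1])
    raise ValueError("no AP50 line found in summary")

ga = """Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.174
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.331"""
print(extract_ap50(ga))  # 0.331
```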

@mochaojie

Hi, I have obtained the Cityscapes dataset. However, when I follow the link for Foggy Cityscapes, I am not sure which download I actually need: on the website I only see Foggy Cityscapes packages for semantic segmentation, not for object detection. Could you point me to the specific foggy dataset download? Thanks!

@neineimeow
Author

Foggy Cityscapes requires emailing their team for access; see the Cityscapes homepage for details.

@tmp12316


Hi, are these the best AP50 results selected from your experiments? I find that the results are sensitive to the training iteration: if I evaluate at an inappropriate iteration, the result is very poor. How about yours? Thanks.
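To illustrate the iteration sensitivity described above: whether you report the best-scoring checkpoint or simply the last one can change the headline number noticeably. A toy sketch with invented AP50 values (none of these numbers come from this thread):

```python
# Hypothetical AP50 measured at several training iterations.
ap50_by_iter = {10000: 0.251, 20000: 0.310, 30000: 0.331, 40000: 0.287}

# Checkpoint with the highest AP50 vs. the final checkpoint.
best_iter = max(ap50_by_iter, key=ap50_by_iter.get)
last_iter = max(ap50_by_iter)

print(best_iter, ap50_by_iter[best_iter])  # 30000 0.331
print(last_iter, ap50_by_iter[last_iter])  # 40000 0.287
```

If results swing this much between checkpoints, it is worth stating explicitly which selection rule produced a reported number.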
