Ground/Flood masks TO DO List #128
Deeplab versions: currently, for ground segmentation we are using Deeplab v2 pretrained on Cityscapes.
Deeplab v2 vs Deeplab v3+MobileNet: DeeplabV3+MobileNet seems to perform better overall than v2. It is better at going around thin objects (poles, trees) but identifies ground in the sky and in bushes, for which we should try some postprocessing. Here are some samples (in each pair, top is v2 computed on 256 px resolution images, bottom is v3):
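Since the spurious "ground" in the sky or bushes is usually disconnected from the true ground region, one cheap postprocessing option is to keep only the connected components of the mask that reach the bottom of the image. A minimal sketch (the function name and the bottom-row heuristic are assumptions, not project code):

```python
import numpy as np
from scipy import ndimage

def clean_ground_mask(mask: np.ndarray) -> np.ndarray:
    """Drop connected components of a binary ground mask that never
    touch the bottom row of the image (e.g. 'ground' found in the sky)."""
    labeled, _ = ndimage.label(mask.astype(bool))
    # Component labels present on the bottom row (0 is background).
    keep = set(np.unique(labeled[-1, :])) - {0}
    return np.isin(labeled, list(keep))
```

This is only a heuristic: it would also delete legitimate ground regions fully occluded from the bottom edge, so it is worth eyeballing results before adopting it.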
Definitely better :)
@melisandeteng can you share fail-cases?
The last one is sort of a failure case. |
We actually need to compare versions of masks generated with deeplab that were computed on images of the same resolution. |
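To keep the comparison fair, both models' masks should be computed at the same resolution, e.g. with the longest side resized to 600 px. A small helper for the target dimensions (hypothetical name, not project code):

```python
def resized_dims(w: int, h: int, target: int = 600) -> tuple:
    """New (width, height) so that the longest side equals `target`,
    preserving aspect ratio."""
    scale = target / max(w, h)
    return round(w * scale), round(h * scale)
```

The resulting tuple can then be passed to e.g. `PIL.Image.resize`.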
We compare Deeplab v2 and Deeplab v3 on 20 images that we labeled with Labelbox, inferring on images resized so that the longest side is 600 px. Pixel accuracy: v2 0.978, v3 0.973. IOU: v2 0.917, v3 0.914. Although the metrics are slightly better with v2, it seems that visually, masks generated with v3 have fewer obvious holes than those with v2 and go around thin objects better. @sashavor @vict0rsch @51N84D what do you think?
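For reference, pixel accuracy and IOU on binary ground masks can be computed as below (a minimal sketch with hypothetical function names, not the project's actual evaluation code):

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of pixels where prediction and ground truth agree."""
    return float((pred == gt).mean())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0
```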
Is there another metric we can use, like the one from that paper I found? I'll send it again to ML-core.
On Wed, Apr 29, 2020 at 1:40 PM, melisandeteng wrote:
We compare Deeplabv2 and Deeplabv3 on 20 images that we labeled with
Labelbox.
We infer on the images resized so that the longest side is 600 px.
Pixel accuracy:
v2: 0.978
v3: 0.973
IOU:
v2: 0.917
v3: 0.914
Although the metrics are slightly better with v2, it seems that visually,
masks generated with v3 have less obvious holes than those with v2 and can
go around thin objects better.
See [here](https://drive.google.com/open?id=1KiwfQNFwZNVq_pY0RoJxvnDNl-TqHfzA) for the comparison on all the images: in each image, original (ground truth), v2, and v3 are displayed.
Here are some examples:
![image](https://user-images.githubusercontent.com/34208548/80627957-a891c980-8a1e-11ea-9479-d5b6779363e4.png)
![image](https://user-images.githubusercontent.com/34208548/80628170-f1498280-8a1e-11ea-88d5-f3376b429fdb.png)
Where both models fail:
![image](https://user-images.githubusercontent.com/34208548/80628124-e393fd00-8a1e-11ea-8e44-05611e63dea3.png)
![image](https://user-images.githubusercontent.com/34208548/80627834-7c764880-8a1e-11ea-84f0-142972d75db2.png)
@sashavor @vict0rsch @51N84D what do you think?
--
Sasha Luccioni, PhD
Director of Scientific Projects (AI for Humanity, Mila), Postdoctoral Researcher (UdeM)
Notes:
- ✅ means finished
- ❌ means canceled

TO DO:
- [ ] Deeplab models comparison
- [ ] Mask generator