zero mean intensity of gradient for some cases #25
Comments
I got the same problem and haven't solved it yet (¦3」∠)
I did solve my error; in my case I had not chosen the right layer output as the target.
What layer did you take? Or what is the correct layer?
@janphhe In my project I used Xception. The target layer was the global_avg_pooling layer, whose output is a flattened vector; the gradient visualization was computed on it.
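A minimal sketch of what that could look like with the stock Keras Xception (the layer name 'avg_pool' and the class index are assumptions; check `model.summary()` for your own model):

```python
from keras.applications import Xception
from keras import backend as K

model = Xception(weights='imagenet')

# 'avg_pool' is the usual name of the GlobalAveragePooling2D layer in
# keras.applications.Xception; its output is already a flat feature vector.
target_layer = model.get_layer('avg_pool')

class_idx = 0  # hypothetical class to explain
class_score = model.output[:, class_idx]

# Gradient of the class score w.r.t. the chosen layer's output,
# averaged over the batch axis to get one weight per feature.
grads = K.gradients(class_score, target_layer.output)[0]
pooled_grads = K.mean(grads, axis=0)
```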
Does somebody have a solution for this? I get the same issue with a VGG19 network :/ I appreciate any help! Thanks a lot!
I'm having the same problem. I'm sure I have selected the correct conv layer, as it works for some images and not others. I'm not using a pre-trained model. EDIT: Changing to LeakyReLU activations to prevent the vanishing-gradient problem solved it for me, although this probably isn't easy with pre-trained networks.
@Ada-Nick, can you please give more details about your solution/intuition?
@margiki LeakyReLU allows the gradient for negative activations to be non-zero, preventing pooled_grads from becoming an all-zero matrix.
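A sketch of what that change could look like when you define the network yourself rather than using a pre-trained model (layer sizes here are placeholders, not from the original post):

```python
from keras.models import Sequential
from keras.layers import Conv2D, LeakyReLU, MaxPooling2D, Flatten, Dense

model = Sequential([
    # No 'relu' inside Conv2D; LeakyReLU keeps a small non-zero gradient
    # for negative pre-activations, so pooled_grads is less likely to
    # collapse to an all-zero matrix.
    Conv2D(32, (3, 3), input_shape=(224, 224, 3)),
    LeakyReLU(alpha=0.1),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3)),
    LeakyReLU(alpha=0.1),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(4, activation='softmax'),  # 4 classes, as in the original post
])
```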
In my case, using LeakyReLU didn't solve the issue. I delved deeper and found that the gradients computed by Grad-CAM were actually negative; Grad-CAM then applies a ReLU to the weighted map, so the result was an empty map.
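One way to check whether this is what is happening is to look at the map before the final ReLU. A sketch, assuming the per-channel feature maps and pooled gradients have already been computed in channels-first shape (C, H, W) and (C,):

```python
import numpy as np

# conv_output:        target layer's feature maps, shape (C, H, W)
# pooled_grads_value: mean gradient per channel, shape (C,)
heatmap = np.tensordot(pooled_grads_value, conv_output, axes=([0], [0]))

print('pooled grads min/max:      ', pooled_grads_value.min(), pooled_grads_value.max())
print('heatmap (pre-ReLU) min/max:', heatmap.min(), heatmap.max())

# If heatmap.max() <= 0, the ReLU step of Grad-CAM zeroes everything out.
heatmap = np.maximum(heatmap, 0)
if heatmap.max() > 0:
    heatmap /= heatmap.max()
```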
I am using Keras with the TensorFlow backend and have fine-tuned the last conv layer and FC layer of my network starting from VGG weights. Now I am using the Grad-CAM technique to visualize which parts of my image triggered the prediction, and I get all zeros for the mean intensity of the gradient over a specific feature-map channel.
I have 4 classes; for my test sample these are the predictions:
Since I am using Theano image ordering (channels first), when I calculate the mean of the gradients my axis is (0, 2, 3).
pooled_grads_value is all zero
Reference: https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.4-visualizing-what-convnets-learn.ipynb
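For context, the relevant part of that notebook's approach, adapted to channels-first ("theano") image ordering, looks roughly like this. This is a sketch: the layer name 'block5_conv3' and the input `x` are assumptions, not the original code.

```python
from keras import backend as K
import numpy as np

# `model` is the fine-tuned VGG-style network; `x` is a preprocessed input
# batch of shape (1, 3, H, W) in channels-first ordering.
class_idx = np.argmax(model.predict(x)[0])
class_output = model.output[:, class_idx]

# Hypothetical name of the last conv layer; adjust to your model.
last_conv_layer = model.get_layer('block5_conv3')

grads = K.gradients(class_output, last_conv_layer.output)[0]

# With channels-first data the channel axis is 1, so the mean over the
# batch and spatial axes is taken over (0, 2, 3).
pooled_grads = K.mean(grads, axis=(0, 2, 3))

iterate = K.function([model.input], [pooled_grads, last_conv_layer.output[0]])
pooled_grads_value, conv_layer_output_value = iterate([x])

# Weight each feature map by its pooled gradient, then average over channels.
for i in range(pooled_grads_value.shape[0]):
    conv_layer_output_value[i, :, :] *= pooled_grads_value[i]
heatmap = np.mean(conv_layer_output_value, axis=0)
heatmap = np.maximum(heatmap, 0)
if heatmap.max() > 0:
    heatmap /= heatmap.max()
```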
I tested the algorithm with more images and found that it works for some of them. Then I noticed that I have a dropout layer after my last conv layer. I did more research (see #2) and modified the code as follows:
But still, for some of the images, all pooled_grads are zeros.
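The modified code itself isn't shown above, but a commonly suggested change when a dropout layer sits after the target conv layer is to pass the Keras learning phase explicitly so dropout is inactive during the gradient computation. A sketch of that idea (an assumption about the kind of modification meant, not the original code):

```python
from keras import backend as K

# Same symbolic graph as before, but the backend function now also takes the
# learning phase; passing 0 puts dropout layers in inference mode.
iterate = K.function([model.input, K.learning_phase()],
                     [pooled_grads, last_conv_layer.output[0]])

pooled_grads_value, conv_layer_output_value = iterate([x, 0])
```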