Hi, I've been trying the code for a while and have a couple of questions about the gradients.
To give some context: I would like to identify which feature maps are the most relevant for a specific class, i.e. not use all the feature maps for the visualization, but only the most important ones.
The paper says that after back-propagating the score for a specific class, we spatially average the gradients of each feature map, and that average captures the feature map's "importance". I've been exploring the distribution of those averaged gradients for each layer of AlexNet, and here is the distribution for one specific layer:
So we have a distribution with both positive and negative values, most of them close to zero.
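For reference, this is roughly how I'm computing those averaged gradients (a minimal sketch, not the repo's exact code; the layer index, the "gondola" class index 576, and the random tensor standing in for a preprocessed image are my own assumptions):

```python
import torch
import torchvision.models as models

model = models.alexnet(pretrained=True).eval()
target_layer = model.features[10]  # assumed: last conv layer of torchvision's AlexNet

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed input image
scores = model(x)
class_idx = 576                  # assumed ImageNet index for "gondola"
model.zero_grad()
scores[0, class_idx].backward()

# One averaged gradient ("weight") per feature map: mean over the spatial dims.
weights = gradients["value"].mean(dim=(2, 3))   # shape: (1, num_feature_maps)
print(weights.shape, weights.min().item(), weights.max().item())
```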
As the code is implemented, we use all of those gradients, and that results in a visualization that looks like this for the ImageNet class gondola:
At first I thought: OK, I won't use all the gradients, I only want the gradients closest to zero, so I set a window to include only those. And here comes my first question:
Are the gradients closest to zero (or exactly zero) the ones that would require the smallest update to the kernels during training, meaning that those feature maps are the most relevant and meaningful for the final classification?
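Concretely, the window experiment looks something like this (again just a sketch; `eps` is an arbitrary half-width I picked, and `weights` / `activations` come from the snippet above):

```python
# Keep only the feature maps whose averaged gradient lies within +/- eps of zero,
# zero out the rest, and build the class activation map from the survivors.
eps = 1e-4                          # assumed window half-width
mask = weights.abs() <= eps         # boolean mask over feature maps
selected = weights * mask           # weights outside the window become 0

acts = activations["value"]
cam = torch.relu((selected[:, :, None, None] * acts).sum(dim=1))  # (1, H, W)
```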
After trying that I didn't get an improved visualization, so I kept exploring the gradients. I then decided to use only the positive or only the negative gradients (a sketch of how follows the two images below); here are the results:
Only positive gradients:
Only negative gradients:
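For completeness, the sign-based experiments look roughly like this (same caveats as above; in particular, how to handle the final ReLU for the negative-only map is my own choice, here I visualize the magnitude of the negative contributions):

```python
acts = activations["value"]

pos_weights = torch.clamp(weights, min=0)   # negative averaged gradients set to 0
neg_weights = torch.clamp(weights, max=0)   # positive averaged gradients set to 0

# Positive-only map: standard Grad-CAM style weighted sum followed by ReLU.
cam_pos = torch.relu((pos_weights[:, :, None, None] * acts).sum(dim=1))

# Negative-only map: the raw weighted sum is non-positive and a ReLU would
# wipe it out, so flip the sign and visualize the magnitude instead.
cam_neg = torch.relu((-neg_weights[:, :, None, None] * acts).sum(dim=1))
```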
It seems to me that the negative gradients are the most meaningful, even more so than using all the gradients, and the same happens for other images as well. Here are my next questions:
Why would the negative gradients be more relevant for the visualization?
What's going on with only the positive values, and why do those lead to a completely different visualization than in the other cases?
Thanks in advance for any answer; I'm looking forward to the discussion.
This paper came out later in the same year you posted; it presents BiGradV, which is "based on the fact both positive and negative gradients can be sensitive to class discrimination".