This is an amazing method. I wanted to share that I added it to the pytorch-grad-cam package.

The focus there isn't on finding shared concepts across a group of images, but on explaining a single image by finding the concepts inside it, as an alternative to methods like Grad-CAM.

Here is a tutorial about it:
https://jacobgil.github.io/pytorch-gradcam-book/Deep%20Feature%20Factorizations.html
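To make the idea concrete, here is a minimal, self-contained sketch of the core deep feature factorization step. This is my own illustration using scikit-learn's NMF, not the package's actual code, and the helper name `factorize_activations` is made up: the spatial activations of an intermediate layer are factorized into a few non-negative components, and each component yields a concept embedding plus a per-pixel heatmap.

```python
import numpy as np
import torch
from sklearn.decomposition import NMF

def factorize_activations(activations: torch.Tensor, n_components: int = 5):
    """activations: (1, C, H, W) feature map from an intermediate layer.
    Returns (concepts, heatmaps): concept embeddings of shape (K, C) and
    per-concept spatial heatmaps of shape (K, H, W)."""
    _, c, h, w = activations.shape
    flat = activations[0].reshape(c, h * w).T.detach().cpu().numpy()  # (H*W, C): one row per pixel
    flat = np.maximum(flat, 0)                                        # NMF needs non-negative input
    nmf = NMF(n_components=n_components, init="nndsvda", max_iter=400)
    weights = nmf.fit_transform(flat)                                 # (H*W, K) concept strength per pixel
    concepts = nmf.components_                                        # (K, C)   concept embeddings
    heatmaps = weights.T.reshape(n_components, h, w)                  # (K, H, W) one heatmap per concept
    return concepts, heatmaps
```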
A few additions I added that might be interesting for you:

- Classifying the concept embeddings, so we know which categories they correspond to. We can then add a legend to the visualizations showing what every concept means (a rough sketch of this follows the list).
- Displaying all of the heatmaps on the same image with color coding (the color coding is something you did in the paper as well).
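For the legend, one simple way to classify the concept embeddings is to run each one through the model's classifier head and take the top-scoring category. This is only a sketch: it assumes a ResNet-style model whose final linear layer (e.g. `model.fc`) accepts the same feature dimension as the concept embeddings, and the helper name is hypothetical.

```python
import torch

def label_concepts(concepts, classifier, class_names):
    """concepts: (K, C) concept embeddings from the factorization.
    classifier: the model's final linear head, e.g. model.fc for a ResNet.
    Returns one legend string per concept: its top predicted class and score."""
    with torch.no_grad():
        logits = classifier(torch.from_numpy(concepts).float())  # (K, num_classes)
        probs = logits.softmax(dim=-1)
    legend = []
    for k in range(probs.shape[0]):
        score, idx = probs[k].max(dim=0)
        legend.append(f"{class_names[int(idx)]} ({float(score):.2f})")
    return legend
```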
Since multiple heatmaps may overlap at some pixels, they have to be combined somehow. For every pixel I simply chose the heatmap with the highest value at that pixel.
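A rough numpy sketch of that per-pixel "winner takes all" combination (the function name and the color/blending choices here are mine, not the package's exact code):

```python
import numpy as np

def combine_heatmaps(heatmaps, colors):
    """heatmaps: (K, H, W) one non-negative heatmap per concept.
    colors: (K, 3) one RGB color (values in [0, 1]) per concept.
    For every pixel, keep only the strongest concept and paint it in that
    concept's color, scaled by how strong the concept is there."""
    winner = heatmaps.argmax(axis=0)               # (H, W) index of the winning concept
    strength = heatmaps.max(axis=0)                # (H, W) its value at each pixel
    strength = strength / (strength.max() + 1e-8)  # normalize for display
    overlay = colors[winner] * strength[..., None] # (H, W, 3) color-coded combined map
    return overlay
```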
I'm thinking of improving this by applying a method that automatically finds the number of components.
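One possible way to do that (just an idea sketch, not something the package does today) is to sweep the number of NMF components and stop once the reconstruction error stops improving meaningfully:

```python
from sklearn.decomposition import NMF

def pick_n_components(flat_activations, max_k=10, tol=0.05):
    """flat_activations: (H*W, C) non-negative matrix of spatial activations.
    Returns the smallest K where adding one more component improves the
    NMF reconstruction error by less than `tol` (a simple elbow heuristic)."""
    prev_err = None
    for k in range(1, max_k + 1):
        err = NMF(n_components=k, init="nndsvda", max_iter=400).fit(flat_activations).reconstruction_err_
        if prev_err is not None and (prev_err - err) / prev_err < tol:
            return k - 1
        prev_err = err
    return max_k
```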
This is an amazing overlooked method for explainability. It works really well.