Captum v0.2.0 Release
The second release of Captum, v0.2.0, adds a variety of new attribution algorithms as well as additional tutorials, type hints, and Google Colab support for Captum Insights.
New Attribution Algorithms
The following new attribution algorithms are provided, all of which can be applied to any type of PyTorch model, including DataParallel models. While the first release focused primarily on gradient-based attribution methods such as Integrated Gradients, this release adds perturbation-based methods, marked by ^ below. We have also added attribution methods designed primarily for convolutional networks, denoted by * below. All attribution methods share a consistent API structure, making it easy to switch between methods.
Attribution of model output with respect to the input features
1. Guided Backprop *
2. Deconvolution *
3. Guided GradCAM *
4. Feature Ablation ^
5. Feature Permutation ^
6. Occlusion ^
7. Shapley Value Sampling ^
Attribution of model output with respect to the layers of the model
1. Layer GradCAM
2. Layer Integrated Gradients
3. Layer DeepLIFT
4. Layer DeepLIFT SHAP
5. Layer Gradient SHAP
6. Layer Feature Ablation ^
Attribution of neurons with respect to the input features
1. Neuron DeepLIFT
2. Neuron DeepLIFT SHAP
3. Neuron Gradient SHAP
4. Neuron Guided Backprop *
5. Neuron Deconvolution *
6. Neuron Feature Ablation ^
^ Denotes Perturbation-Based Algorithm. These methods compute attribution by evaluating the model on perturbed versions of the input as opposed to using gradient information.
* Denotes attribution method designed primarily for convolutional networks.
New Tutorials
We have added new tutorials demonstrating Captum on BERT models, on regression models, and with perturbation-based methods. These tutorials include:
- Interpreting question answering with BERT
- Interpreting regression models using Boston House Prices Dataset
- Feature Ablation on Images
Type Hints
The Captum code base is now fully annotated with Python type hints and type checked using mypy. Users can now accurately type check code that builds on Captum.
Bug Fixes and Minor Features
- All Captum methods now support in-place modules and operations. (Issue #156)
- Computing convergence delta was fixed to work appropriately on CUDA devices. (Issue #163)
- A ReLU flag was added to Layer GradCAM to optionally apply a ReLU operation to the returned attributions. (Issue #179)
- All layer and neuron attribution methods now support attribution with respect to either the input or output of a module, based on the attribute_to_layer_input and attribute_to_neuron_input flags.
- All layer attribution methods now support modules with multiple outputs.
Captum Insights
- Captum Insights now works on Google Colab. (Issue #116)
- Captum Insights can also be launched as a Jupyter Notebook widget.
- New attribution methods in Captum Insights:
- Deconvolution
- DeepLIFT
- Guided Backprop
- Input X Gradient
- Saliency