- 🦊 Xplique (pronounced \ɛks.plik\) is a Python toolkit dedicated to explainability. The goal of this library is to gather the state of the art of Explainable AI to help you understand your complex neural network models. Originally built for Tensorflow's model it also works for Pytorch's model partially.
+ 🦊 Xplique (pronounced \ɛks.plik\) is a Python toolkit dedicated to explainability. The goal of this library is to gather the state of the art of Explainable AI to help you understand your complex neural network models. Originally built for TensorFlow models, it also partially supports PyTorch models.
-
Explore Xplique docs »
+
📚 Explore Xplique docs
+ |
+
Explore Xplique tutorials 🔥
Attributions
@@ -66,6 +68,8 @@ Finally, the _Metrics_ module covers the current metrics used in explainability.
- [**Attribution Methods**: Sanity checks paper](https://colab.research.google.com/drive/1uJOmAg6RjlOIJj6SWN9sYRamBdHAuyaS)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1uJOmAg6RjlOIJj6SWN9sYRamBdHAuyaS)
- [**Attribution Methods**: Tabular data and Regression](https://colab.research.google.com/drive/1pjDJmAa9oeSquYtbYh6tksU6eTmObIcq)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1pjDJmAa9oeSquYtbYh6tksU6eTmObIcq)
+ - [**Attribution Methods**: Object Detection](https://colab.research.google.com/drive/1X3Yq7BduMKqTA0XEheoVIpOo3IvOrzWL)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1X3Yq7BduMKqTA0XEheoVIpOo3IvOrzWL)
+ - [**Attribution Methods**: Semantic Segmentation](https://colab.research.google.com/drive/1AHg7KO1fCOX5nZLGZfxkZ2-DLPPdSfbX)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1AHg7KO1fCOX5nZLGZfxkZ2-DLPPdSfbX)
- [**FORGRad**: Gradient strikes back with FORGrad](https://colab.research.google.com/drive/1ibLzn7r9QQIEmZxApObowzx8n9ukinYB)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ibLzn7r9QQIEmZxApObowzx8n9ukinYB)
- [**Attribution Methods**: Metrics](https://colab.research.google.com/drive/1WEpVpFSq-oL1Ejugr8Ojb3tcbqXIOPBg)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WEpVpFSq-oL1Ejugr8Ojb3tcbqXIOPBg)
@@ -75,7 +79,7 @@ Finally, the _Metrics_ module covers the current metrics used in explainability.
- - [**PyTorch's model**: Getting started](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe)
+ - [**PyTorch models**: Getting started](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe)
- [**Concepts Methods**: Testing with Concept Activation Vectors](https://colab.research.google.com/drive/1iuEz46ZjgG97vTBH8p-vod3y14UETvVE)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1iuEz46ZjgG97vTBH8p-vod3y14UETvVE)
@@ -95,19 +99,19 @@ Finally, the _Metrics_ module covers the current metrics used in explainability.
- [**Modern Feature Visualization with MaCo**: Getting started](https://colab.research.google.com/drive/1l0kag1o-qMY4NCbWuAwnuzkzd9sf92ic)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1l0kag1o-qMY4NCbWuAwnuzkzd9sf92ic)
- You can find a certain number of [other practical tutorials just here](tutorials/). This section is actively developed and more contents will be
- included. We will try to cover all the possible usage of the library, feel free to contact us if you have any suggestions or recommandations towards tutorials you would like to see.
+ You can find many [**other practical tutorials here**](tutorials/). This section is actively developed and more content will be
+ included. We aim to cover all possible uses of the library; feel free to contact us if you have any suggestions or recommendations for tutorials you would like to see.
## 🚀 Quick Start
-Xplique requires a version of python higher than 3.6 and several libraries including Tensorflow and Numpy. Installation can be done using Pypi:
+Xplique requires Python 3.7 or higher and several libraries, including TensorFlow and NumPy. Installation can be done using PyPI:
```bash
pip install xplique
```
-Now that Xplique is installed, here are 4 basic examples of what you can do with the available modules.
+Now that Xplique is installed, here are some basic examples of what you can do with the available modules.
??? example "Attributions Methods"
    Let's start with a simple example: computing Grad-CAM for several images (or a complete dataset) on a trained model.
@@ -123,9 +127,7 @@ Now that Xplique is installed, here are 4 basic examples of what you can do with
# or just `explainer(images, labels)`
```
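    For intuition about what the snippet above produces: the heart of Grad-CAM is a channel-weighted sum of feature maps followed by a ReLU. A toy, library-free sketch in plain NumPy (the arrays are made up for illustration; in practice the library extracts the feature maps and gradients from your model):

    ```python
    import numpy as np

    # Toy stand-ins: feature maps A of shape (H, W, K) and the gradients of the
    # class score w.r.t. those maps, which a framework would normally provide.
    feature_maps = np.array([[[1.0, 0.0], [2.0, 1.0]],
                             [[0.0, 3.0], [1.0, 1.0]]])   # (H=2, W=2, K=2)
    gradients = np.array([[[1.0, -1.0], [1.0, -1.0]],
                          [[1.0, -1.0], [1.0, -1.0]]])    # same shape

    # Grad-CAM: one weight per channel = spatial mean of the gradients...
    weights = gradients.mean(axis=(0, 1))                 # shape (K,)
    # ...then a ReLU over the channel-weighted sum of feature maps.
    cam = np.maximum(np.einsum("hwk,k->hw", feature_maps, weights), 0.0)
    print(cam)  # [[1. 1.] [0. 0.]]
    ```

    The resulting low-resolution map is then upsampled to the input size to obtain the heatmap shown in the example above.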
- All attributions methods share a common API. You can find out more about it [here](api/attributions/api_attributions/).
-
- In addition, you should also look at the [model's specificities](api/attributions/model/) and the [operator parameter documentation](api/attributions/operator/)
+ All attribution methods share a common API, described [in the attributions API documentation](api/attributions/api_attributions/).
??? example "Attributions Metrics"
@@ -145,7 +147,7 @@ Now that Xplique is installed, here are 4 basic examples of what you can do with
score_grad_cam = metric(explanations)
```
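    As a rough intuition for what a fidelity metric such as Deletion measures, here is a toy, library-free sketch (the linear "model" and the perfectly faithful attribution are invented for illustration): features are removed in decreasing order of attributed importance, and a faithful explanation should make the score drop quickly.

    ```python
    def linear_model(x):
        # hypothetical "model": a fixed linear scorer over 4 features
        weights = [3.0, 1.0, 0.5, 0.1]
        return sum(w * v for w, v in zip(weights, x))

    x = [1.0, 1.0, 1.0, 1.0]
    attribution = [3.0, 1.0, 0.5, 0.1]  # here the explanation is perfectly faithful

    # Deletion-style curve: score after zeroing the top-1, top-2, ... features.
    order = sorted(range(len(x)), key=lambda i: -attribution[i])
    scores = []
    masked = list(x)
    for i in order:
        masked[i] = 0.0
        scores.append(linear_model(masked))
    print(scores)  # decreasing toward 0, roughly [1.6, 0.6, 0.1, 0.0]
    ```

    The metric then summarizes this curve (e.g. its area); the faster the drop, the more faithful the explanation.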
- All attributions metrics share a common API. You can find out more about it [here](api/metrics/api_metrics/).
+ All attribution metrics share a common API. You can find out more about it [here](api/attributions/metrics/api_metrics/).
??? example "Concepts Extraction"
@@ -186,7 +188,7 @@ Now that Xplique is installed, here are 4 basic examples of what you can do with
??? example "PyTorch with Xplique"
- Even though the library was mainly designed to be a Tensorflow toolbox we have been working on a very practical wrapper to facilitate the integration of your PyTorch's model into Xplique's framework!
+ Even though the library was mainly designed as a TensorFlow toolbox, we provide a very practical wrapper to facilitate the integration of your PyTorch models into Xplique's framework!
```python
import torch
@@ -208,56 +210,61 @@ Now that Xplique is installed, here are 4 basic examples of what you can do with
score_saliency = metric(explanations)
```
- Want to know more ? Check the [PyTorch documentation](pytorch/)
+ Want to know more? Check the [PyTorch documentation](api/attributions/pytorch/)
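    Much of the heavy lifting in such a wrapper is about bridging tensor conventions: PyTorch models expect channels-first inputs while TensorFlow-oriented tooling assumes channels-last. A minimal, hypothetical sketch of that idea (not Xplique's actual wrapper), using NumPy in place of real framework tensors:

    ```python
    import numpy as np

    class ChannelsLastWrapper:
        """Hypothetical sketch: expose a channels-first model through a
        channels-last interface, as a TF-oriented toolbox would expect."""
        def __init__(self, model):
            self.model = model  # callable expecting (N, C, H, W)

        def __call__(self, inputs_nhwc):
            # (N, H, W, C) -> (N, C, H, W) before calling the wrapped model
            return self.model(np.transpose(inputs_nhwc, (0, 3, 1, 2)))

    # A dummy "torch-style" model: global average pool over spatial dims.
    dummy_model = lambda x: x.mean(axis=(2, 3))  # returns (N, C)

    wrapped = ChannelsLastWrapper(dummy_model)
    out = wrapped(np.ones((2, 8, 8, 3)))         # channels-last input
    print(out.shape)  # (2, 3)
    ```

    The real wrapper additionally handles device placement and converting between framework tensors and NumPy arrays; see the documentation linked above.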
## 📦 What's Included
+There are four modules in Xplique: [Attribution methods](api/attributions/api_attributions/), [Attribution metrics](api/attributions/metrics/api_metrics/), [Concepts](api/concepts/cav/), and [Feature visualization](api/feature_viz/feature_viz/). In particular, the attribution methods module supports a wide diversity of tasks and data types: [Classification](api/attributions/classification/), [Regression](api/attributions/regression/), [Object Detection](api/attributions/object_detection/), and [Semantic Segmentation](api/attributions/semantic_segmentation/). The tasks each method supports, along with its Tensorflow or PyTorch compatibility, are highlighted in the following table:
+
??? abstract "Table of attributions available"
- All the attributions method presented below handle both **Classification** and **Regression** tasks.
-
- | **Attribution Method** | Type of Model | Source | Tabular Data | Images | Time-Series | Tutorial |
- | :--------------------- | :------------ | :---------------------------------------- | :----------------: | :----------------: | :----------------: | :----------------: |
- | Deconvolution          | TF            | [Paper](https://arxiv.org/abs/1311.2901)  | ✅ | ✅ | WIP | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19eB3uwAtCKZgkoWtMzrF0LTJ-htF_KE7) |
- | Grad-CAM               | TF            | [Paper](https://arxiv.org/abs/1610.02391) |  | ✅ |  | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1nsB7xdQbU0zeYQ1-aB_D-M67-RAnvt4X) |
- | Grad-CAM++             | TF            | [Paper](https://arxiv.org/abs/1710.11063) |  | ✅ |  | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1nsB7xdQbU0zeYQ1-aB_D-M67-RAnvt4X) |
- | Gradient Input         | TF, Pytorch** | [Paper](https://arxiv.org/abs/1704.02685) | ✅ | ✅ | WIP | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19eB3uwAtCKZgkoWtMzrF0LTJ-htF_KE7) |
- | Guided Backprop        | TF            | [Paper](https://arxiv.org/abs/1412.6806)  | ✅ | ✅ | WIP | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19eB3uwAtCKZgkoWtMzrF0LTJ-htF_KE7) |
- | Integrated Gradients   | TF, Pytorch** | [Paper](https://arxiv.org/abs/1703.01365) | ✅ | ✅ | WIP | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1UXJYVebDVIrkTOaOl-Zk6pHG3LWkPcLo) |
- | Kernel SHAP            | TF, Pytorch** , Callable* | [Paper](https://arxiv.org/abs/1705.07874) | ✅ | ✅ | WIP | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1frholXRE4XQQ3W5yZuPQ2-xqc-LTczfT) |
- | Lime                   | TF, Pytorch** , Callable* | [Paper](https://arxiv.org/abs/1602.04938) | ✅ | ✅ | WIP | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1frholXRE4XQQ3W5yZuPQ2-xqc-LTczfT) |
- | Occlusion              | TF, Pytorch** , Callable* | [Paper](https://arxiv.org/abs/1311.2901)  | ✅ | ✅ | WIP | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/15xmmlxQkNqNuXgHO51eKogXvLgs-sG4q) |
- | Rise                   | TF, Pytorch** , Callable* | [Paper](https://arxiv.org/abs/1806.07421) | WIP | ✅ |  | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1icu2b1JGfpTRa-ic8tBSXnqqfuCGW2mO) |
- | Saliency               | TF, Pytorch** | [Paper](https://arxiv.org/abs/1312.6034)  | ✅ | ✅ | WIP | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19eB3uwAtCKZgkoWtMzrF0LTJ-htF_KE7) |
- | SmoothGrad             | TF, Pytorch** | [Paper](https://arxiv.org/abs/1706.03825) | ✅ | ✅ | WIP | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/12-tlM_TdZ12oc5lNL2S2g-hcMJV8tZUD) |
- | SquareGrad             | TF, Pytorch** | [Paper](https://arxiv.org/abs/1806.10758) | ✅ | ✅ | WIP | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/12-tlM_TdZ12oc5lNL2S2g-hcMJV8tZUD) |
- | VarGrad                | TF, Pytorch** | [Paper](https://arxiv.org/abs/1810.03292) | ✅ | ✅ | WIP | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/12-tlM_TdZ12oc5lNL2S2g-hcMJV8tZUD) |
- | Sobol Attribution      | TF, Pytorch** | [Paper](https://arxiv.org/abs/2111.04138) |  | ✅ |  | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1XproaVxXjO9nrBSyyy7BuKJ1vy21iHs2) |
- | Hsic Attribution       | TF, Pytorch** | [Paper](https://arxiv.org/abs/2206.06219) |  | ✅ |  | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1XproaVxXjO9nrBSyyy7BuKJ1vy21iHs2) |
- | FORGrad enhancement    | TF, Pytorch** | [Paper](https://arxiv.org/abs/2307.09591) |  | ✅ |  | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ibLzn7r9QQIEmZxApObowzx8n9ukinYB) |
+ | **Attribution Method** | Type of Model | Source | Tabular Data | Images | Time-Series | Tutorial |
+ | :--------------------- | :----------------------- | :---------------------------------------- | :------------: | :------------------------: | :---------: | :----------------: |
+ | Deconvolution          | TF                       | [Paper](https://arxiv.org/abs/1311.2901)  | C:✔️<br>R:✔️ | C:✔️<br>OD:❌<br>SS:❌ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19eB3uwAtCKZgkoWtMzrF0LTJ-htF_KE7) |
+ | Grad-CAM               | TF                       | [Paper](https://arxiv.org/abs/1610.02391) | ❌ | C:✔️<br>OD:❌<br>SS:❌ | ❌ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1nsB7xdQbU0zeYQ1-aB_D-M67-RAnvt4X) |
+ | Grad-CAM++             | TF                       | [Paper](https://arxiv.org/abs/1710.11063) | ❌ | C:✔️<br>OD:❌<br>SS:❌ | ❌ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1nsB7xdQbU0zeYQ1-aB_D-M67-RAnvt4X) |
+ | Gradient Input         | TF, PyTorch**            | [Paper](https://arxiv.org/abs/1704.02685) | C:✔️<br>R:✔️ | C:✔️<br>OD:✔️<br>SS:✔️ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19eB3uwAtCKZgkoWtMzrF0LTJ-htF_KE7) |
+ | Guided Backprop        | TF                       | [Paper](https://arxiv.org/abs/1412.6806)  | C:✔️<br>R:✔️ | C:✔️<br>OD:❌<br>SS:❌ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19eB3uwAtCKZgkoWtMzrF0LTJ-htF_KE7) |
+ | Integrated Gradients   | TF, PyTorch**            | [Paper](https://arxiv.org/abs/1703.01365) | C:✔️<br>R:✔️ | C:✔️<br>OD:✔️<br>SS:✔️ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1UXJYVebDVIrkTOaOl-Zk6pHG3LWkPcLo) |
+ | Kernel SHAP            | TF, PyTorch**, Callable* | [Paper](https://arxiv.org/abs/1705.07874) | C:✔️<br>R:✔️ | C:✔️<br>OD:✔️<br>SS:✔️ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1frholXRE4XQQ3W5yZuPQ2-xqc-LTczfT) |
+ | Lime                   | TF, PyTorch**, Callable* | [Paper](https://arxiv.org/abs/1602.04938) | C:✔️<br>R:✔️ | C:✔️<br>OD:✔️<br>SS:✔️ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1frholXRE4XQQ3W5yZuPQ2-xqc-LTczfT) |
+ | Occlusion              | TF, PyTorch**, Callable* | [Paper](https://arxiv.org/abs/1311.2901)  | C:✔️<br>R:✔️ | C:✔️<br>OD:✔️<br>SS:✔️ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/15xmmlxQkNqNuXgHO51eKogXvLgs-sG4q) |
+ | Rise                   | TF, PyTorch**, Callable* | [Paper](https://arxiv.org/abs/1806.07421) | 🔵 | C:✔️<br>OD:✔️<br>SS:✔️ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1icu2b1JGfpTRa-ic8tBSXnqqfuCGW2mO) |
+ | Saliency               | TF, PyTorch**            | [Paper](https://arxiv.org/abs/1312.6034)  | C:✔️<br>R:✔️ | C:✔️<br>OD:✔️<br>SS:✔️ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19eB3uwAtCKZgkoWtMzrF0LTJ-htF_KE7) |
+ | SmoothGrad             | TF, PyTorch**            | [Paper](https://arxiv.org/abs/1706.03825) | C:✔️<br>R:✔️ | C:✔️<br>OD:✔️<br>SS:✔️ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/12-tlM_TdZ12oc5lNL2S2g-hcMJV8tZUD) |
+ | SquareGrad             | TF, PyTorch**            | [Paper](https://arxiv.org/abs/1806.10758) | C:✔️<br>R:✔️ | C:✔️<br>OD:✔️<br>SS:✔️ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/12-tlM_TdZ12oc5lNL2S2g-hcMJV8tZUD) |
+ | VarGrad                | TF, PyTorch**            | [Paper](https://arxiv.org/abs/1810.03292) | C:✔️<br>R:✔️ | C:✔️<br>OD:✔️<br>SS:✔️ | 🔵 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/12-tlM_TdZ12oc5lNL2S2g-hcMJV8tZUD) |
+ | Sobol Attribution      | TF, PyTorch**            | [Paper](https://arxiv.org/abs/2111.04138) | 🔵 | C:✔️<br>OD:✔️<br>SS:✔️ | ❌ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1XproaVxXjO9nrBSyyy7BuKJ1vy21iHs2) |
+ | Hsic Attribution       | TF, PyTorch**            | [Paper](https://arxiv.org/abs/2206.06219) | 🔵 | C:✔️<br>OD:✔️<br>SS:✔️ | ❌ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1XproaVxXjO9nrBSyyy7BuKJ1vy21iHs2) |
+ | FORGrad enhancement    | TF, PyTorch**            | [Paper](https://arxiv.org/abs/2307.09591) | ❌ | C:✔️<br>OD:✔️<br>SS:✔️ | ❌ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ibLzn7r9QQIEmZxApObowzx8n9ukinYB) |
TF : Tensorflow compatible
- \* : See the [Callable documentation](callable/)
+ C : [Classification](api/attributions/classification/) | R : [Regression](api/attributions/regression/) |
+ OD : [Object Detection](api/attributions/object_detection/) | SS : [Semantic Segmentation](api/attributions/semantic_segmentation/)
+
+ \* : See the [Callable documentation](api/attributions/callable/)
+
- ** : See the [Xplique for Pytorch documentation](pytorch/), and the [**PyTorch's model**: Getting started](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe) notebook
+ ** : See the [Xplique for PyTorch documentation](api/attributions/pytorch/), and the [**PyTorch models**: Getting started](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe) notebook.
+ ✔️ : Supported by Xplique | ❌ : Not applicable | 🔵 : Work in Progress
??? abstract "Table of attribution metrics available"
| **Attribution Metrics** | Type of Model | Property | Source |
| :---------------------- | :------------ | :--------------- | :---------------------------------------- |
- | MuFidelity | TF, Pytorch** | Fidelity | [Paper](https://arxiv.org/abs/2005.00631) |
- | Deletion | TF, Pytorch** | Fidelity | [Paper](https://arxiv.org/abs/1806.07421) |
- | Insertion | TF, Pytorch** | Fidelity | [Paper](https://arxiv.org/abs/1806.07421) |
- | Average Stability | TF, Pytorch** | Stability | [Paper](https://arxiv.org/abs/2005.00631) |
- | MeGe | TF, Pytorch** | Representativity | [Paper](https://arxiv.org/abs/2009.04521) |
- | ReCo | TF, Pytorch** | Consistency | [Paper](https://arxiv.org/abs/2009.04521) |
+ | MuFidelity | TF, PyTorch** | Fidelity | [Paper](https://arxiv.org/abs/2005.00631) |
+ | Deletion | TF, PyTorch** | Fidelity | [Paper](https://arxiv.org/abs/1806.07421) |
+ | Insertion | TF, PyTorch** | Fidelity | [Paper](https://arxiv.org/abs/1806.07421) |
+ | Average Stability | TF, PyTorch** | Stability | [Paper](https://arxiv.org/abs/2005.00631) |
+ | MeGe | TF, PyTorch** | Representativity | [Paper](https://arxiv.org/abs/2009.04521) |
+ | ReCo | TF, PyTorch** | Consistency | [Paper](https://arxiv.org/abs/2009.04521) |
| (WIP) e-robustness |
TF : Tensorflow compatible
- ** : See the [Xplique for Pytorch documentation](pytorch/), and the [**PyTorch's model**: Getting started](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe) notebook
+ ** : See the [Xplique for PyTorch documentation](api/attributions/pytorch/), and the [**PyTorch models**: Getting started](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe) notebook.
??? abstract "Table of concept methods available"
@@ -297,7 +304,7 @@ This library is one approach of many to explain your model. We don't expect it t
??? info "Other interesting tools to explain your model:"
- [Lucid](https://github.com/tensorflow/lucid) the wonderful library specialized in feature visualization from OpenAI.
- - [Captum](https://captum.ai/) the Pytorch library for Interpretability research
+ - [Captum](https://captum.ai/) the PyTorch library for Interpretability research
- [Tf-explain](https://github.com/sicara/tf-explain) that implements multiple attribution methods and proposes a callback API for TensorFlow.
- [Alibi Explain](https://github.com/SeldonIO/alibi) for model inspection and interpretation
- [SHAP](https://github.com/slundberg/shap) a very popular library to compute local explanations using the classic Shapley values from game theory and their related extensions
diff --git a/docs/tutorials.md b/docs/tutorials.md
index 6dd83321..f7e5645a 100644
--- a/docs/tutorials.md
+++ b/docs/tutorials.md
@@ -1,6 +1,6 @@
# Tutorials: Notebooks 📔
-We propose here several tutorials to discover the different functionnalities that the library has to offer.
+We propose here several tutorials to discover the different functionalities that the library has to offer.
We decided to host those tutorials on [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb?utm_source=scs-index)
mainly because you will be able to play the notebooks with a GPU which should greatly improve your User eXperience.
@@ -9,14 +9,16 @@ Here is the lists of the availables tutorial for now:
## Getting Started
-| **Tutorial Name** | Notebook |
-| :-------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------: |
-| Getting Started | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1XproaVxXjO9nrBSyyy7BuKJ1vy21iHs2) |
+| **Tutorial Name** | Notebook |
+| :------------------------------ | :----------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+| Getting Started | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1XproaVxXjO9nrBSyyy7BuKJ1vy21iHs2) |
| Sanity checks for Saliency Maps | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1uJOmAg6RjlOIJj6SWN9sYRamBdHAuyaS) |
-| Tabular data and Regression | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1XproaVxXjO9nrBSyyy7BuKJ1vy21iHs2) |
-| Metrics | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WEpVpFSq-oL1Ejugr8Ojb3tcbqXIOPBg) |
-| Concept Activation Vectors | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1iuEz46ZjgG97vTBH8p-vod3y14UETvVE) |
-| Feature Visualization | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1st43K9AH-UL4eZM1S4QdyrOi7Epa5K8v) |
+| Tabular data and Regression | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1pjDJmAa9oeSquYtbYh6tksU6eTmObIcq) |
+| Object detection | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1X3Yq7BduMKqTA0XEheoVIpOo3IvOrzWL) |
+| Semantic Segmentation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1AHg7KO1fCOX5nZLGZfxkZ2-DLPPdSfbX) |
+| Metrics | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WEpVpFSq-oL1Ejugr8Ojb3tcbqXIOPBg) |
+| Concept Activation Vectors | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1iuEz46ZjgG97vTBH8p-vod3y14UETvVE) |
+| Feature Visualization | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1st43K9AH-UL4eZM1S4QdyrOi7Epa5K8v) |
## Attributions
@@ -52,10 +54,12 @@ Here is the lists of the availables tutorial for now:
## PyTorch Wrapper
-| **Tutorial Name** | Notebook |
-| :-------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------: |
-| PyTorch's model: Getting started | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe) |
-| Metrics: With Pytorch's model| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/16bEmYXzLEkUWLRInPU17QsodAIbjdhGP) |
+| **Tutorial Name** | Notebook |
+| :------------------------------------- | :-----------------------------------------------------------------------------------------------------------------------------------------------------: |
+| PyTorch models: Getting started | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe) |
+| Metrics: With PyTorch models | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/16bEmYXzLEkUWLRInPU17QsodAIbjdhGP) |
+| Object detection on PyTorch model | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1X3Yq7BduMKqTA0XEheoVIpOo3IvOrzWL) |
+| Semantic Segmentation on PyTorch model | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1AHg7KO1fCOX5nZLGZfxkZ2-DLPPdSfbX) |
## Concepts extraction
diff --git a/mkdocs.yml b/mkdocs.yml
index 3a2bfdb2..92f48a15 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -8,34 +8,36 @@ nav:
- Home: index.md
- Attributions methods:
- API Description: api/attributions/api_attributions.md
- - Model Specifications: api/attributions/model.md
- - Operator: api/attributions/operator.md
- Methods:
- - DeconvNet: api/attributions/deconvnet.md
- - Grad-CAM: api/attributions/grad_cam.md
- - Grad-CAM++: api/attributions/grad_cam_pp.md
- - Gradient Input: api/attributions/gradient_input.md
- - Guided Backprop: api/attributions/guided_backpropagation.md
- - Hsic Attribution Method: api/attributions/hsic.md
- - Integrated Gradient: api/attributions/integrated_gradients.md
- - KernelSHAP: api/attributions/kernel_shap.md
- - Lime: api/attributions/lime.md
- - Occlusion sensitivity: api/attributions/occlusion.md
- - Rise: api/attributions/rise.md
- - Saliency: api/attributions/saliency.md
- - SmoothGrad: api/attributions/smoothgrad.md
- - Sobol Attribution Method: api/attributions/sobol.md
- - SquareGrad: api/attributions/square_grad.md
- - VarGrad: api/attributions/vargrad.md
- - ForGRad: api/attributions/forgrad.md
+ - DeconvNet: api/attributions/methods/deconvnet.md
+ - ForGRad: api/attributions/methods/forgrad.md
+ - Grad-CAM: api/attributions/methods/grad_cam.md
+ - Grad-CAM++: api/attributions/methods/grad_cam_pp.md
+ - Gradient Input: api/attributions/methods/gradient_input.md
+ - Guided Backprop: api/attributions/methods/guided_backpropagation.md
+ - Hsic Attribution Method: api/attributions/methods/hsic.md
+ - Integrated Gradient: api/attributions/methods/integrated_gradients.md
+ - KernelSHAP: api/attributions/methods/kernel_shap.md
+ - Lime: api/attributions/methods/lime.md
+ - Occlusion sensitivity: api/attributions/methods/occlusion.md
+ - Rise: api/attributions/methods/rise.md
+ - Saliency: api/attributions/methods/saliency.md
+ - SmoothGrad: api/attributions/methods/smoothgrad.md
+ - Sobol Attribution Method: api/attributions/methods/sobol.md
+ - SquareGrad: api/attributions/methods/square_grad.md
+ - VarGrad: api/attributions/methods/vargrad.md
- Metrics:
- - API Description: api/metrics/api_metrics.md
- - Deletion: api/metrics/deletion.md
- - Insertion: api/metrics/insertion.md
- - MuFidelity: api/metrics/mu_fidelity.md
- - AverageStability: api/metrics/avg_stability.md
- - PyTorch: pytorch.md
- - Callable: callable.md
+ - API Description: api/attributions/metrics/api_metrics.md
+ - Deletion: api/attributions/metrics/deletion.md
+ - Insertion: api/attributions/metrics/insertion.md
+ - MuFidelity: api/attributions/metrics/mu_fidelity.md
+ - AverageStability: api/attributions/metrics/avg_stability.md
+ - PyTorch: api/attributions/pytorch.md
+ - Callable: api/attributions/callable.md
+ - Classification: api/attributions/classification.md
+ - Object Detection: api/attributions/object_detection.md
+ - Regression: api/attributions/regression.md
+ - Semantic Segmentation: api/attributions/semantic_segmentation.md
- Concept based:
- Cav: api/concepts/cav.md
- Tcav: api/concepts/tcav.md
diff --git a/requirements.txt b/requirements.txt
index 125d3ba5..da889c05 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,3 +5,4 @@ scikit-image
matplotlib
scipy
opencv-python
+deprecated
diff --git a/setup.cfg b/setup.cfg
index d588f2e2..d88d31a0 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,5 +1,5 @@
[bumpversion]
-current_version = 1.1.0
+current_version = 1.2.0
commit = True
tag = True
@@ -30,7 +30,7 @@ ignore-imports = no
envlist = py{37,38,39,310}-lint, py{37,38,39,310}-tf{22,25,28,211}, py{38,39,310}-tf{25,28,211}-torch{111,113,200}
[testenv:py{37,38,39,310}-lint]
-deps =
+deps =
pylint
-rrequirements.txt
commands =
diff --git a/setup.py b/setup.py
index c33a4bc5..5bd1d241 100644
--- a/setup.py
+++ b/setup.py
@@ -5,7 +5,7 @@
setup(
name="Xplique",
- version="1.1.0",
+ version="1.2.0",
description="Explanations toolbox for Tensorflow 2",
long_description=README,
long_description_content_type="text/markdown",
@@ -13,7 +13,7 @@
author_email="thomas_fel@brown.edu",
license="MIT",
install_requires=['tensorflow>=2.1.0', 'numpy', 'scikit-learn', 'scikit-image',
- 'matplotlib', 'scipy', 'opencv-python'],
+ 'matplotlib', 'scipy', 'opencv-python', 'deprecated'],
extras_require={
"tests": ["pytest", "pylint"],
"docs": ["mkdocs", "mkdocs-material", "numkdoc"],
diff --git a/tests/attributions/test_callable.py b/tests/attributions/test_callable.py
index f5457f12..c32a7d1d 100644
--- a/tests/attributions/test_callable.py
+++ b/tests/attributions/test_callable.py
@@ -10,7 +10,7 @@
from sklearn.ensemble import RandomForestClassifier
from xplique.attributions import (Occlusion, Rise, Lime, KernelShap, SobolAttributionMethod)
-from xplique.commons.operators import predictions_operator, batch_predictions,\
+from xplique.commons.operators_operations import predictions_operator, batch_predictions,\
batch_predictions_one_hot_callable
from xplique.commons.callable_operations import predictions_one_hot_callable
diff --git a/tests/attributions/test_object_detector.py b/tests/attributions/test_object_detector.py
index 581aa6ca..aeaf5c1c 100644
--- a/tests/attributions/test_object_detector.py
+++ b/tests/attributions/test_object_detector.py
@@ -1,100 +1,67 @@
-import numpy as np
+"""
+Test object detection BoundingBoxesExplainer
+"""
+import os
+import sys
+sys.path.append(os.getcwd())
+
+import unittest
+
import tensorflow as tf
-from xplique.attributions import BoundingBoxesExplainer, Rise
-from xplique.attributions.object_detector import SegmentationIouCalculator, BoxIouCalculator, \
- ImageObjectDetectorExplainer, ImageObjectDetectorScoreCalculator, IObjectFormater
-
-from ..utils import almost_equal, generate_model, generate_data
-
-
-def test_iou_mask():
- """Assert the Mask IoU calculation is ok"""
- dtype = np.float32
-
- m1 = np.array([
- [10, 20, 30, 40]
- ], dtype=dtype)
- m2 = np.array([
- [15, 20, 30, 40]
- ], dtype=dtype)
- m3 = np.array([
- [0, 20, 10, 40]
- ], dtype=dtype)
- m4 = np.array([
- [0, 0, 100, 100]
- ], dtype=dtype)
-
- iou_calculator = BoxIouCalculator()
-
- assert almost_equal(iou_calculator.intersect(m1, m2), 300.0 / 400.0)
- assert almost_equal(iou_calculator.intersect(m1, m3), 0.0)
- assert almost_equal(iou_calculator.intersect(m3, m2), 0.0)
- assert almost_equal(iou_calculator.intersect(m1, m4), 400.0 / 10_000)
- assert almost_equal(iou_calculator.intersect(m2, m4), 300.0 / 10_000)
- assert almost_equal(iou_calculator.intersect(m3, m4), 200.0 / 10_000)
-
-
-def test_iou_segmentation():
- """Assert the segmentation IoU computation is ok"""
-
- m1 = np.array([
- [0, 0],
- [1, 1],
- ])[None, :, :]
- m2 = np.array([
- [1, 1],
- [0, 0],
- ])[None, :, :]
- m3 = np.array([
- [1, 0],
- [1, 0],
- ])[None, :, :]
-
- iou_calculator = SegmentationIouCalculator()
-
- assert almost_equal(iou_calculator.intersect(m1, m2), 0.0)
- assert almost_equal(iou_calculator.intersect(m1, m3), 1.0/3.0)
- assert almost_equal(iou_calculator.intersect(m3, m2), 1.0/3.0)
- assert almost_equal(iou_calculator.intersect(m1, m1), 1.0)
- assert almost_equal(iou_calculator.intersect(m2, m2), 1.0)
-
-
-def test_image_object_detector():
+from tests.utils import generate_data, generate_object_detection_model
+
+from xplique.commons.exceptions import InvalidModelException
+from xplique.attributions import BoundingBoxesExplainer, Rise, Saliency
+
+
+def test_object_detector():
"""Assert input shape returned is correct"""
input_shape = (8, 8, 1)
+ nb_samples = 3
nb_labels = 2
- x, y = generate_data(input_shape, nb_labels, nb_labels)
- model = generate_model(input_shape, nb_labels)
+ max_nb_boxes = 4
+ x, _ = generate_data(input_shape, nb_labels, nb_samples)
+ model = generate_object_detection_model(input_shape, max_nb_boxes=max_nb_boxes, nb_labels=nb_labels)
method = Rise(model, nb_samples=10)
obj_ref = tf.cast([
[0, 0, 100, 100, 0.9, 1.0, 0.0],
+ [50, 50, 150, 150, 0.5, 1.0, 0.0],
+ [0, 10, 20, 30, 0.7, 0.0, 1.0],
], tf.float32)
- class BBoxFormater(IObjectFormater):
+ explainer = BoundingBoxesExplainer(method)
- def format_objects(self, predictions):
- if np.array_equal(predictions.numpy(), obj_ref.numpy()):
- return obj_ref[:4], obj_ref[4:5], obj_ref[5:]
+ test_raise_assertion_error = unittest.TestCase().assertRaises
+ test_raise_assertion_error(InvalidModelException, explainer.gradient)
+ test_raise_assertion_error(InvalidModelException, explainer.batch_gradient)
+ phis = explainer(x, obj_ref)
- bboxes = tf.cast([
- [0, 10, 20, 30],
- [0, 0, 100, 100],
- ], tf.float32)
+ assert phis.shape == (obj_ref.shape[0], input_shape[0], input_shape[1])
- proba = tf.cast([0.9, 0.1], tf.float32)
- classif = tf.cast([[1.0, 0.0], [0.0, 1.0]], tf.float32)
+def test_gradient_object_detector():
+ """Assert input shape returned is correct"""
+ input_shape = (8, 8, 1)
+ nb_samples = 3
+ nb_labels = 2
+ max_nb_boxes = 4
+ x, _ = generate_data(input_shape, nb_labels, nb_samples)
+ model = generate_object_detection_model(input_shape, max_nb_boxes=max_nb_boxes, nb_labels=nb_labels)
- return bboxes, proba, classif
+ method = Saliency(model)
- formater = BBoxFormater()
- explainer = ImageObjectDetectorExplainer(method, formater, BoxIouCalculator())
+ obj_ref = tf.cast([
+ [0, 0, 100, 100, 0.9, 1.0, 0.0],
+ [50, 50, 150, 150, 0.5, 1.0, 0.0],
+ [0, 10, 20, 30, 0.7, 0.0, 1.0],
+ ], tf.float32)
- phis = explainer(x, obj_ref)
+ explainer = BoundingBoxesExplainer(method)
- assert phis.shape == (1, input_shape[0], input_shape[1])
+ phis = explainer(x, obj_ref)
+ assert phis.shape == (obj_ref.shape[0], input_shape[0], input_shape[1])
\ No newline at end of file
diff --git a/tests/commons/test_object_detection_operator.py b/tests/commons/test_object_detection_operator.py
new file mode 100644
index 00000000..e895241a
--- /dev/null
+++ b/tests/commons/test_object_detection_operator.py
@@ -0,0 +1,162 @@
+"""
+Test object detection operator
+"""
+import os
+import sys
+sys.path.append(os.getcwd())
+
+from itertools import combinations
+
+import numpy as np
+import tensorflow as tf
+
+from tests.utils import almost_equal, generate_object_detection_model, generate_data
+
+from xplique.attributions import Occlusion, SmoothGrad
+from xplique.commons.operators import object_detection_operator
+
+
+def test_object_detector():
+ """Assert input shape returned is correct"""
+ input_shape = (8, 8, 1)
+ nb_samples = 3
+ nb_labels = 2
+ max_nb_boxes = 4
+ x, _ = generate_data(input_shape, nb_labels, nb_samples)
+ model = generate_object_detection_model(input_shape, max_nb_boxes=max_nb_boxes, nb_labels=nb_labels)
+
+ # 3 bounding boxes, one for each input sample
+ obj_ref = tf.cast([
+ [0, 0, 100, 100, 0.9, 1.0, 0.0],
+ [50, 50, 150, 150, 0.5, 1.0, 0.0],
+ [0, 10, 20, 30, 0.7, 0.0, 1.0],
+ ], tf.float32)
+
+ explainer = Occlusion(model, operator=object_detection_operator, patch_size=4, patch_stride=2)
+
+ # test with a single box to explain per image, targets of shape (3, 7)
+ phis = explainer(x, obj_ref)
+ assert phis.shape == (obj_ref.shape[0], input_shape[0], input_shape[1])
+ assert not np.isnan(phis[0, 0, 0])
+
+ phis2 = explainer(x, tf.expand_dims(obj_ref, axis=1))
+ assert phis.shape == phis2.shape
+ assert almost_equal(phis, phis2)
+
+
+def test_gradient_object_detector():
+ """Assert input shape returned is correct"""
+ input_shape = (8, 8, 1)
+ nb_samples = 3
+ nb_labels = 2
+ max_nb_boxes = 4
+ x, _ = generate_data(input_shape, nb_labels, nb_samples)
+ model = generate_object_detection_model(input_shape, max_nb_boxes=max_nb_boxes, nb_labels=nb_labels)
+
+ explainer = SmoothGrad(model, nb_samples=10, operator=object_detection_operator)
+
+ obj_ref = tf.cast([
+ [0, 0, 100, 100, 0.9, 1.0, 0.0],
+ [50, 50, 150, 150, 0.5, 1.0, 0.0],
+ [0, 10, 20, 30, 0.7, 0.0, 1.0],
+ ], tf.float32)
+
+ phis = explainer(x, obj_ref)
+
+ assert phis.shape[:3] == (obj_ref.shape[0], input_shape[0], input_shape[1])
+
+
+def test_several_boxes_object_detector():
+ """Assert input shape returned is correct"""
+ input_shape = (8, 8, 1)
+ nb_samples = 3
+ nb_labels = 2
+ max_nb_boxes = 4
+ x, _ = generate_data(input_shape, nb_labels, nb_samples)
+ model = generate_object_detection_model(input_shape, max_nb_boxes=max_nb_boxes, nb_labels=nb_labels)
+
+ explainer = SmoothGrad(model, nb_samples=10, operator=object_detection_operator)
+
+ obj_ref = tf.cast([
+ [0, 0, 100, 100, 0.9, 1.0, 0.0],
+ [50, 50, 150, 150, 0.5, 1.0, 0.0],
+ [0, 10, 20, 30, 0.7, 0.0, 1.0],
+ ], tf.float32)
+
+ phis = explainer(x, obj_ref)
+ assert phis.shape[:3] == (obj_ref.shape[0], input_shape[0], input_shape[1])
+
+ several_object_refs = tf.tile(tf.expand_dims(obj_ref, axis=1), [1, 5, 1])
+
+ phis = explainer(x, several_object_refs)
+
+ assert phis.shape[:3] == (several_object_refs.shape[0], input_shape[0], input_shape[1])
+
+
+def test_all_object_detector_operators():
+ """Assert input shape returned is correct"""
+ input_shape = (8, 8, 1)
+ nb_samples = 3
+ nb_labels = 2
+ max_nb_boxes = 4
+ x, _ = generate_data(input_shape, nb_labels, nb_samples)
+ model = generate_object_detection_model(input_shape, max_nb_boxes=max_nb_boxes, nb_labels=nb_labels)
+
+ obj_ref = tf.cast([
+ [0, 0, 100, 100, 0.9, 1.0, 0.0],
+ [50, 50, 150, 150, 0.5, 1.0, 0.0],
+ [0, 10, 20, 30, 0.7, 0.0, 1.0],
+ ], tf.float32)
+
+ # set params
+ parameters_normal = {
+ "include_detection_probability": True,
+ "include_classification_score": True,
+ }
+
+ parameters_intersection = {
+ "include_detection_probability": False,
+ "include_classification_score": False,
+ }
+
+ parameters_probability = {
+ "include_detection_probability": True,
+ "include_classification_score": False,
+ }
+
+ parameters_classification = {
+ "include_detection_probability": False,
+ "include_classification_score": True,
+ }
+
+ # create operators
+ normal_op = lambda model, inputs, targets: \
+ object_detection_operator(model, inputs, targets, **parameters_normal)
+
+ intersection_op = lambda model, inputs, targets: \
+ object_detection_operator(model, inputs, targets, **parameters_intersection)
+
+ probability_op = lambda model, inputs, targets: \
+ object_detection_operator(model, inputs, targets, **parameters_probability)
+
+ classification_op = lambda model, inputs, targets: \
+ object_detection_operator(model, inputs, targets, **parameters_classification)
+
+ # compute explanations
+ phis = Occlusion(model, operator=object_detection_operator, patch_size=4, patch_stride=2)(x, obj_ref)
+
+ normal_phis = Occlusion(model, operator=normal_op, patch_size=4, patch_stride=2)(x, obj_ref)
+
+ intersection_phis = Occlusion(model, operator=intersection_op, patch_size=4, patch_stride=2)(x, obj_ref)
+
+ probability_phis = Occlusion(model, operator=probability_op, patch_size=4, patch_stride=2)(x, obj_ref)
+
+ classification_phis = Occlusion(model, operator=classification_op, patch_size=4, patch_stride=2)(x, obj_ref)
+
+ for phi in [phis, normal_phis, intersection_phis, probability_phis, classification_phis]:
+ assert phi.shape[:3] == (obj_ref.shape[0], input_shape[0], input_shape[1])
+
+ assert almost_equal(phis, normal_phis)
+
+ for phi1, phi2 in combinations([normal_phis, intersection_phis, probability_phis, classification_phis], 2):
+ assert not almost_equal(phi1, phi2)
\ No newline at end of file
diff --git a/tests/commons/test_operators.py b/tests/commons/test_operators.py
index d33083d6..1e39a550 100644
--- a/tests/commons/test_operators.py
+++ b/tests/commons/test_operators.py
@@ -11,11 +11,8 @@
SquareGrad, GradCAM, Occlusion, Rise, GuidedBackprop, DeconvNet,
GradCAMPP, Lime, KernelShap, SobolAttributionMethod,
HsicAttributionMethod)
-from xplique.commons.operators import (check_operator, predictions_operator, regression_operator,
- binary_segmentation_operator, segmentation_operator)
-from xplique.commons.operators import Tasks, get_operator
-from xplique.commons.exceptions import InvalidOperatorException
-from ..utils import generate_data, generate_regression_model
+from xplique.commons.operators import (predictions_operator, regression_operator)
+from ..utils import generate_data, generate_model, generate_regression_model, almost_equal
def default_methods(model, operator):
@@ -35,14 +32,6 @@ def default_methods(model, operator):
]
-def get_segmentation_model():
- model = tf.keras.Sequential([
- tf.keras.layers.Input((20, 20, 1)),
- ])
- model.compile()
- return model
-
-
def get_concept_model():
model = tf.keras.Sequential([
tf.keras.layers.Input((6)),
@@ -52,64 +41,12 @@ def get_concept_model():
return model
-def test_check_operator():
- # ensure that the check operator detects non-operator
-
- # operator must have at least 3 arguments
- function_with_2_arguments = lambda x,y: 0
-
- # operator must be Callable
- not_a_function = [1, 2, 3]
-
- for operator in [function_with_2_arguments, not_a_function]:
- try:
- check_operator(operator)
- assert False
- except InvalidOperatorException:
- pass
-
-
-def test_proposed_operators():
- # ensure all proposed operators are operators
- for operator in [predictions_operator, regression_operator,
- binary_segmentation_operator, segmentation_operator]:
- check_operator(operator)
-
-
-def test_get_operator():
- tasks_name = [task.name for task in Tasks]
- assert tasks_name.sort() == ['classification', 'regression'].sort()
- # get by enum
- assert get_operator(Tasks.CLASSIFICATION) is predictions_operator
- assert get_operator(Tasks.REGRESSION) is regression_operator
-
- # get by string
- assert get_operator("classification") is predictions_operator
- assert get_operator("regression") is regression_operator
-
- # assert a not valid string does not work
- with pytest.raises(AssertionError):
- get_operator("random")
-
- # operator must have at least 3 arguments
- function_with_2_arguments = lambda x,y: 0
-
- # operator must be Callable
- not_a_function = [1, 2, 3]
-
- for operator in [function_with_2_arguments, not_a_function]:
- try:
- get_operator(operator)
- except InvalidOperatorException:
- pass
-
-
-def test_regression_operator():
- input_shape, nb_labels, samples = ((10, 10, 1), 10, 20)
+def test_predictions_operator():
+ input_shape, nb_labels, samples = ((10, 10, 3), 10, 20)
x, y = generate_data(input_shape, nb_labels, samples)
- regression_model = generate_regression_model(input_shape, nb_labels)
+ classification_model = generate_model(input_shape, nb_labels)
- methods = default_methods(regression_model, regression_operator)
+ methods = default_methods(classification_model, predictions_operator)
for method in methods:
assert hasattr(method, 'inference_function')
@@ -120,18 +57,15 @@ def test_regression_operator():
phis = method(x, y)
assert x.shape[:-1] == phis.shape[:3]
-
-def test_segmentation_operator():
- segmentation_model = get_segmentation_model()
- x, y = generate_data((20, 20, 3), 10, 10)
- def segmentation_operator(model, x, y):
- # explaining channel 0
- return tf.reduce_sum(model(x)[:,:,0], (1, 2))
+def test_regression_operator():
+ input_shape, nb_labels, samples = ((10, 10, 1), 10, 20)
+ x, y = generate_data(input_shape, nb_labels, samples)
+ regression_model = generate_regression_model(input_shape, nb_labels)
- methods = default_methods(segmentation_model, segmentation_operator)
+ methods = default_methods(regression_model, regression_operator)
for method in methods:
assert hasattr(method, 'inference_function')
@@ -153,7 +87,6 @@ def test_concept_operator():
def concept_operator(model, x, y):
x = tf.reshape(x, (-1, 20*20))
- print(x.shape, random_projection.shape)
ui = x @ random_projection
return tf.reduce_sum(model(ui) * y, axis=-1)
diff --git a/tests/commons/test_operators_operations.py b/tests/commons/test_operators_operations.py
new file mode 100644
index 00000000..92e79abd
--- /dev/null
+++ b/tests/commons/test_operators_operations.py
@@ -0,0 +1,76 @@
+"""
+Ensure the operator utilities (check_operator, get_operator, Tasks) behave as expected
+"""
+
+import pytest
+
+import xplique
+from xplique.commons.operators_operations import check_operator, Tasks, get_operator
+from xplique.commons.operators import (predictions_operator, regression_operator,
+ semantic_segmentation_operator, object_detection_operator)
+from xplique.commons.exceptions import InvalidOperatorException
+
+
+def test_check_operator():
+ # ensure that the check operator detects non-operator
+
+ # operator must have at least 3 arguments
+ function_with_2_arguments = lambda x,y: 0
+
+ # operator must be Callable
+ not_a_function = [1, 2, 3]
+
+ for operator in [function_with_2_arguments, not_a_function]:
+ try:
+ check_operator(operator)
+ assert False
+ except InvalidOperatorException:
+ pass
+
+
+def test_get_operator():
+ possible_tasks = ["classification", "regression", "semantic segmentation", "object detection",
+ "object detection box position", "object detection box proba",
+ "object detection box class"]
+
+ tasks_name = [task.name.lower().replace("_", " ") for task in Tasks]
+ assert sorted(tasks_name) == sorted(possible_tasks)
+
+ # get by enum
+ assert get_operator(Tasks.CLASSIFICATION) is predictions_operator
+ assert get_operator(Tasks.REGRESSION) is predictions_operator # TODO, change when there is a real regression operator
+ assert get_operator(Tasks.OBJECT_DETECTION) is object_detection_operator
+ assert get_operator(Tasks.SEMANTIC_SEGMENTATION) is semantic_segmentation_operator
+
+ # get by string
+ assert get_operator("classification") is predictions_operator
+ assert get_operator("regression") is predictions_operator # TODO, change when there is a real regression operator
+ assert get_operator("object detection") is object_detection_operator
+ assert get_operator("semantic segmentation") is semantic_segmentation_operator
+
+ # assert a not valid string does not work
+ with pytest.raises(AssertionError):
+ get_operator("random")
+
+ # operator must have at least 3 arguments
+ function_with_2_arguments = lambda x,y: 0
+
+ # operator must be Callable
+ not_a_function = [1, 2, 3]
+
+ # get_operator must reject invalid operators by raising
+ # InvalidOperatorException instead of returning silently
+ for operator in [function_with_2_arguments, not_a_function]:
+ with pytest.raises(InvalidOperatorException):
+ get_operator(operator)
+
+
+def test_proposed_operators():
+ # ensure all proposed operators are operators
+ for operator in [task.value for task in Tasks]:
+ check_operator(operator)
+
+def test_enum_shortcut():
+ # ensure the Tasks shortcut exposed at the package root also yields valid operators
+ for operator in [task.value for task in xplique.Tasks]:
+ check_operator(operator)
diff --git a/tests/commons/test_segmentation_operator.py b/tests/commons/test_segmentation_operator.py
new file mode 100644
index 00000000..c51df8c0
--- /dev/null
+++ b/tests/commons/test_segmentation_operator.py
@@ -0,0 +1,109 @@
+"""
+Ensure we can use the operator functionality on various models
+"""
+
+import numpy as np
+import tensorflow as tf
+
+from xplique.attributions import (Saliency, GradientInput, IntegratedGradients, SmoothGrad, VarGrad,
+ SquareGrad, GradCAM, Occlusion, Rise, GuidedBackprop, DeconvNet,
+ GradCAMPP, Lime, KernelShap, SobolAttributionMethod,
+ HsicAttributionMethod)
+from xplique.commons.operators_operations import semantic_segmentation_operator
+from ..utils import generate_data, almost_equal
+
+
+def default_methods(model, operator):
+ return [
+ Saliency(model, operator=operator),
+ GradientInput(model, operator=operator),
+ SmoothGrad(model, operator=operator),
+ VarGrad(model, operator=operator),
+ SquareGrad(model, operator=operator),
+ IntegratedGradients(model, operator=operator),
+ Occlusion(model, operator=operator),
+ Rise(model, operator=operator, nb_samples=2),
+ GuidedBackprop(model, operator=operator),
+ DeconvNet(model, operator=operator),
+ SobolAttributionMethod(model, operator=operator, grid_size=2, nb_design=2),
+ HsicAttributionMethod(model, operator=operator, grid_size=2, nb_design=2),
+ ]
+
+
+def get_segmentation_model(input_shape=(20, 20, 1)):
+ model = tf.keras.Sequential([
+ tf.keras.layers.Input(input_shape),
+ ])
+ model.compile()
+ return model
+
+
+def test_segmentation_operator():
+ input_shape = (20, 20, 3)
+ segmentation_model = get_segmentation_model(input_shape)
+
+ x, _ = generate_data(input_shape, 10, 10)
+ y, _ = generate_data(input_shape, 10, 10)
+
+ methods = default_methods(segmentation_model, semantic_segmentation_operator)
+ for method in methods:
+
+ assert hasattr(method, 'inference_function')
+ assert hasattr(method, 'batch_inference_function')
+ assert hasattr(method, 'gradient')
+ assert hasattr(method, 'batch_gradient')
+
+ phis = method(x, y)
+
+ assert x.shape[:-1] == phis.shape[:3]
+
+
+def test_segmentation_operator_computation():
+ image = [[[[0, 0, 1.0],
+ [0, 0, 1.0],
+ [1.0, 1.0, 1.0],],
+ [[0, 0, 0],
+ [0, 1.0, 0],
+ [1.0, 1.0, 1.0],],
+ [[0, 0, 0],
+ [0, 0, 0],
+ [1.0, 0, 0],],]]
+ image = tf.transpose(tf.convert_to_tensor(image, tf.float32), perm=[0, 2, 3, 1])
+
+ model = lambda x: tf.concat([x, tf.expand_dims(tf.reduce_mean(x, axis=-1), axis=-1)], axis=-1)
+
+ target_1 = [[[[0, 0, 1.0],
+ [0, 0, 1.0],
+ [1.0, 1.0, 1.0],],
+ [[0, 0, 0],
+ [0, 0, 0],
+ [0, 0, 0],],
+ [[0, 0, 0],
+ [0, 0, 0],
+ [0, 0, 0],],
+ [[0, 0, 0],
+ [0, 0, 0],
+ [0, 0, 0],],]]
+ target_1 = tf.transpose(tf.convert_to_tensor(target_1, tf.float32), perm=[0, 2, 3, 1])
+
+ target_2 = [[[[0, 0, 0],
+ [0, 0, 0],
+ [0, 0, 0],],
+ [[0, 0, 0],
+ [0, 0, 0],
+ [0, 0, 0],],
+ [[0, 0, 0],
+ [0, 0, 0],
+ [0, 0, 0],],
+ [[0, 0, 0],
+ [0, 0, 0],
+ [1.0, 1.0, 1.0],],]]
+ target_2 = tf.transpose(tf.convert_to_tensor(target_2, tf.float32), perm=[0, 2, 3, 1])
+
+ scores = model(np.array(image)) * target_2
+
+ score_1 = semantic_segmentation_operator(model, image, np.array(target_1))
+ score_2 = semantic_segmentation_operator(model, np.array(image), target_2)
+
+ assert almost_equal(score_1, 1.0)
+ assert almost_equal(score_2, (7.0 / 3) / 3)
diff --git a/tests/utils.py b/tests/utils.py
index f9457372..ebbf13bd 100644
--- a/tests/utils.py
+++ b/tests/utils.py
@@ -2,7 +2,8 @@
from sklearn.linear_model import LinearRegression
import tensorflow as tf
from tensorflow.keras.models import Sequential, Model
-from tensorflow.keras.layers import Dense, Conv1D, Conv2D, Activation, GlobalAveragePooling1D, Dropout, Flatten, MaxPooling2D, Input
+from tensorflow.keras.layers import (Dense, Conv1D, Conv2D, Activation, GlobalAveragePooling1D,
+ Dropout, Flatten, MaxPooling2D, Input, Reshape)
from tensorflow.keras.utils import to_categorical
def generate_data(x_shape=(32, 32, 3), num_labels=10, samples=100):
@@ -114,3 +115,43 @@ def __call__(self, inputs):
return tf_model
+def generate_object_detection_model(input_shape=(32, 32, 3), max_nb_boxes=10, nb_labels=5, with_nmf=False):
+ # create a model that generates max_nb_boxes boxes and randomly selects some of them
+ output_shape = (max_nb_boxes, 5 + nb_labels)
+ model = Sequential()
+ model.add(Input(shape=input_shape))
+ model.add(Conv2D(4, kernel_size=(2, 2),
+ activation='relu'))
+ model.add(MaxPooling2D(pool_size=(2, 2)))
+ model.add(Flatten())
+ model.add(Dense(np.prod(output_shape)))
+ model.add(Reshape(output_shape))
+ model.add(Activation('sigmoid'))
+ model.compile(loss='mae', optimizer='sgd')
+
+ # ensure iou computation will work
+ def make_plausible_boxes(model_output):
+ coordinates = tf.sort(model_output[:, :, :4], axis=-1) * 200
+ probabilities = model_output[:, :, 4][:, :, tf.newaxis]
+ classifications = tf.nn.softmax(model_output[:, :, 5:], axis=-1)
+ new_output = tf.concat([coordinates, probabilities, classifications], axis=-1)
+ return new_output
+
+ valid_model = lambda inputs: make_plausible_boxes(model(inputs))
+
+ # equivalent of NMS (non-maximum suppression)
+ def randomly_select_boxes(boxes):
+ boxes_ids = tf.range(tf.shape(boxes)[0])
+ nb_boxes = tf.experimental.numpy.random.randint(1, max_nb_boxes)
+ boxes_ids = tf.random.shuffle(boxes_ids)[:nb_boxes]
+ return tf.gather(boxes, boxes_ids)
+
+ # model applying the NMS-like random selection
+ def model_with_random_nb_boxes(inputs):
+ all_boxes = valid_model(inputs)
+ some_boxes = [randomly_select_boxes(boxes) for boxes in all_boxes]
+ return some_boxes
+
+ if with_nmf:
+ return model_with_random_nb_boxes
+ return valid_model
diff --git a/tests/utils_functions/__init__.py b/tests/utils_functions/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/tests/utils_functions/test_object_detection.py b/tests/utils_functions/test_object_detection.py
new file mode 100644
index 00000000..f44b1609
--- /dev/null
+++ b/tests/utils_functions/test_object_detection.py
@@ -0,0 +1,34 @@
+"""
+Test utils functions for object detection
+"""
+
+import numpy as np
+
+from xplique.utils_functions.object_detection import _box_iou
+
+from ..utils import almost_equal
+
+
+def test_iou_mask():
+ """Assert the Mask IoU calculation is ok"""
+ dtype = np.float32
+
+ m1 = np.array([
+ [10, 20, 30, 40]
+ ], dtype=dtype)
+ m2 = np.array([
+ [15, 20, 30, 40]
+ ], dtype=dtype)
+ m3 = np.array([
+ [0, 20, 10, 40]
+ ], dtype=dtype)
+ m4 = np.array([
+ [0, 0, 100, 100]
+ ], dtype=dtype)
+
+ assert almost_equal(_box_iou(m1, m2), 300.0 / 400.0)
+ assert almost_equal(_box_iou(m1, m3), 0.0)
+ assert almost_equal(_box_iou(m3, m2), 0.0)
+ assert almost_equal(_box_iou(m1, m4), 400.0 / 10_000)
+ assert almost_equal(_box_iou(m2, m4), 300.0 / 10_000)
+ assert almost_equal(_box_iou(m3, m4), 200.0 / 10_000)
diff --git a/tests/utils_functions/test_segmentation.py b/tests/utils_functions/test_segmentation.py
new file mode 100644
index 00000000..7e3020f8
--- /dev/null
+++ b/tests/utils_functions/test_segmentation.py
@@ -0,0 +1,212 @@
+import tensorflow as tf
+
+from xplique.utils_functions.segmentation import *
+
+def get_prediction():
+ predictions = [[[0.6, 0.6, 0.6, 0.2, 0.2],
+ [0.6, 0.6, 0.2, 0.2, 0.2],
+ [0.6, 0.2, 0.2, 0.2, 0.2],
+ [0.2, 0.2, 0.2, 0.2, 0.2],
+ [0.2, 0.2, 0.2, 0.2, 0.2],],
+ [[0.2, 0.2, 0.2, 0.6, 0.6],
+ [0.2, 0.2, 0.6, 0.6, 0.6],
+ [0.2, 0.6, 0.6, 0.6, 0.2],
+ [0.6, 0.6, 0.6, 0.2, 0.2],
+ [0.6, 0.6, 0.2, 0.2, 0.2],],
+ [[0.2, 0.2, 0.2, 0.2, 0.2],
+ [0.2, 0.2, 0.2, 0.2, 0.2],
+ [0.2, 0.2, 0.2, 0.2, 0.6],
+ [0.2, 0.2, 0.2, 0.6, 0.6],
+ [0.2, 0.2, 0.6, 0.6, 0.6],],]
+
+ predictions = tf.convert_to_tensor(predictions, tf.float32)
+
+ predictions = tf.transpose(predictions, perm=[1, 2, 0])
+
+ assert tf.reduce_all(tf.equal(tf.reduce_sum(predictions, axis=-1), tf.ones((5, 5))))
+
+ return predictions
+
+
+def test_get_class_zone():
+ predictions = get_prediction()
+
+ target_1 = get_class_zone(predictions, class_id=1)
+
+ expected_target = tf.transpose(tf.convert_to_tensor(
+ [[[0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],],
+ [[0, 0, 0, 1, 1],
+ [0, 0, 1, 1, 1],
+ [0, 1, 1, 1, 0],
+ [1, 1, 1, 0, 0],
+ [1, 1, 0, 0, 0],],
+ [[0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],],],tf.float32), perm=[1, 2, 0])
+
+ assert tf.reduce_all(tf.equal(target_1, expected_target))
+
+ target_0 = get_class_zone(predictions, class_id=0)
+ target_2 = get_class_zone(predictions, class_id=2)
+
+ assert tf.reduce_all(tf.equal(
+ tf.reduce_sum(tf.stack([target_0, target_1, target_2], axis=0), axis=[0, 3]),
+ tf.fill((5, 5), 1.0)))
+
+
+def test_get_connected_zone():
+ predictions = get_prediction()
+
+ target_1 = get_connected_zone(predictions, coordinates=(2, 2))
+
+ expected_target = tf.transpose(tf.convert_to_tensor(
+ [[[0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],],
+ [[0, 0, 0, 1, 1],
+ [0, 0, 1, 1, 1],
+ [0, 1, 1, 1, 0],
+ [1, 1, 1, 0, 0],
+ [1, 1, 0, 0, 0],],
+ [[0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],],],tf.float32), perm=[1, 2, 0])
+
+ assert tf.reduce_all(tf.equal(target_1, expected_target))
+
+ target_0 = get_connected_zone(predictions, coordinates=(0, 0))
+ target_2 = get_connected_zone(predictions, coordinates=(4, 4))
+
+ assert tf.reduce_all(tf.equal(
+ tf.reduce_sum(tf.stack([target_0, target_1, target_2], axis=0), axis=[0, 3]),
+ tf.fill((5, 5), 1.0)))
+
+
+def test_list_class_connected_zones():
+ predictions = get_prediction()
+
+ predictions = tf.stack([predictions[:, :, 0] + predictions[:, :, 2], predictions[:, :, 1]], axis=-1)
+
+ zones_0 = list_class_connected_zones(predictions, class_id=0, zone_minimum_size=1)
+ zones_1 = list_class_connected_zones(predictions, class_id=1, zone_minimum_size=1)
+ no_zones = list_class_connected_zones(predictions, class_id=0, zone_minimum_size=10)
+
+ assert len(zones_0) == 2
+ assert len(zones_1) == 1
+ assert len(no_zones) == 0
+
+ expected_zones_1 = tf.transpose(tf.convert_to_tensor(
+ [[[0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],],
+ [[0, 0, 0, 1, 1],
+ [0, 0, 1, 1, 1],
+ [0, 1, 1, 1, 0],
+ [1, 1, 1, 0, 0],
+ [1, 1, 0, 0, 0],],], tf.float32), perm=[1, 2, 0])
+
+ assert tf.reduce_all(tf.equal(zones_1[0], expected_zones_1))
+
+ expected_zones_21 = tf.transpose(tf.convert_to_tensor(
+ [[[1, 1, 1, 0, 0],
+ [1, 1, 0, 0, 0],
+ [1, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],],
+ [[0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],],], tf.float32), perm=[1, 2, 0])
+
+ expected_zones_22 = tf.transpose(tf.convert_to_tensor(
+ [[[0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 1],
+ [0, 0, 0, 1, 1],
+ [0, 0, 1, 1, 1],],
+ [[0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],],], tf.float32), perm=[1, 2, 0])
+
+ assert tf.reduce_all(tf.equal(zones_0[0], expected_zones_21))\
+ or tf.reduce_all(tf.equal(zones_0[0], expected_zones_22))
+
+ assert tf.reduce_all(tf.equal(zones_0[1], expected_zones_21))\
+ or tf.reduce_all(tf.equal(zones_0[1], expected_zones_22))
+
+
+
+def test_get_in_out_border():
+ predictions = get_prediction()
+
+ central_zone = get_connected_zone(predictions, coordinates=(2, 2))
+
+ borders = get_in_out_border(central_zone)
+
+ expected_borders = tf.transpose(tf.convert_to_tensor(
+ [[[0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],],
+ [[ 0, -1, -1, 1, 0],
+ [-1, -1, 1, 1, 1],
+ [-1, 1, 1, 1, -1],
+ [ 1, 1, 1, -1, -1],
+ [ 0, 1, -1, -1, 0],],
+ [[0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],],],tf.float32), perm=[1, 2, 0])
+
+ assert tf.reduce_all(tf.equal(borders, expected_borders))
+
+
+def test_get_common_border():
+ predictions = get_prediction()
+
+ left_corner_zone = get_connected_zone(predictions, coordinates=(0, 0))
+ central_zone = get_connected_zone(predictions, coordinates=(2, 2))
+
+ left_corner_borders = get_in_out_border(left_corner_zone)
+ central_borders = get_in_out_border(central_zone)
+
+ common_borders_0 = get_common_border(left_corner_borders, central_borders)
+ common_borders_1 = get_common_border(central_borders, left_corner_borders)
+
+ expected_common_borders = tf.transpose(tf.convert_to_tensor(
+ [[[ 0, 1, 1, -1, 0],
+ [ 1, 1, -1, -1, 0],
+ [ 1, -1, -1, 0, 0],
+ [-1, -1, 0, 0, 0],
+ [ 0, 0, 0, 0, 0],],
+ [[ 0, -1, -1, 1, 0],
+ [-1, -1, 1, 1, 0],
+ [-1, 1, 1, 0, 0],
+ [ 1, 1, 0, 0, 0],
+ [ 0, 0, 0, 0, 0],],
+ [[0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0],],],tf.float32), perm=[1, 2, 0])
+
+ assert tf.reduce_all(tf.equal(common_borders_0, common_borders_1))
+
+ assert tf.reduce_all(tf.equal(common_borders_0, expected_common_borders))
\ No newline at end of file
diff --git a/xplique/__init__.py b/xplique/__init__.py
index daad8dfc..d3ea3182 100644
--- a/xplique/__init__.py
+++ b/xplique/__init__.py
@@ -6,10 +6,12 @@
techniques
"""
-__version__ = '1.1.0'
+__version__ = '1.2.0'
from . import attributions
from . import concepts
from . import features_visualizations
from . import commons
from . import plots
+
+from .commons import Tasks
diff --git a/xplique/attributions/object_detector.py b/xplique/attributions/object_detector.py
index 760e4305..cc434806 100644
--- a/xplique/attributions/object_detector.py
+++ b/xplique/attributions/object_detector.py
@@ -2,268 +2,108 @@
Module related to Object detector method
"""
-from typing import Iterable, Tuple, Union, Optional
-import abc
+from deprecated import deprecated
-import tensorflow as tf
import numpy as np
+import tensorflow as tf
-from xplique.attributions.base import BlackBoxExplainer
-from xplique.commons import operator_batching
+from ..types import Optional, Callable, Union
+from .base import BlackBoxExplainer, WhiteBoxExplainer
+from ..commons import get_gradient_functions
+from ..commons.operators import object_detection_operator
+from ..utils_functions.object_detection import _box_iou, _format_objects
-class IouCalculator:
- """
- Used to compute the Intersection Over Union (IOU).
- """
- @abc.abstractmethod
- def intersect(self, objects_a: tf.Tensor, objects_b: tf.Tensor) -> tf.Tensor:
- """
- Compute the intersection between two batched objects (e.g boxes, segmentation masks...)
- Parameters
- ----------
- objects_a
- First batch of objects to compare.
- objects_b
- Second batch of objects to compare.
+OLD_OBJECT_DETECTION_DEPRECATION_MESSAGE = """
+\n
+The way attribution explanations are computed changed drastically after version 1.0.0.
+For more information please refer to the documentation:
+https://deel-ai.github.io/xplique/latest/api/attributions/object_detection/
- Returns
- -------
- score
- Real value between [0,1] corresponding to the intersection of the 2 objects.
- """
- raise NotImplementedError()
+Nonetheless, here is a quick example of how it is used now:
+```
+from xplique.attributions import AnyMethod
+explainer = AnyMethod(model, operator="object detection", ...)
+explanation = explainer(inputs, targets) # be aware, targets are specific to object detection here
+```
+"""
-class SegmentationIouCalculator(IouCalculator):
- """
- Compute segmentation masks IOU.
+class BoundingBoxesExplainer(BlackBoxExplainer):
"""
- def intersect(self, masks_a: tf.Tensor, masks_b: tf.Tensor) -> tf.Tensor:
- """
- Compute the intersection between two batched segmentation masks.
- Each segmentation is a boolean mask on the whole image
-
- Parameters
- ----------
- masks_a
- First batch of segmentation masks.
- masks_b
- Second batch of segmentation masks.
-
- Returns
- -------
- iou_score
- The IOU score between the first and second batch of masks.
- """
- # pylint: disable=W0221,W0237
-
- axis = np.arange(1, tf.rank(masks_a))
-
- inter_area = tf.reduce_sum(tf.cast(tf.logical_and(masks_a, masks_b),
- dtype=tf.float32), axis=axis)
- union_area = tf.reduce_sum(tf.cast(tf.logical_or(masks_a, masks_b),
- dtype=tf.float32), axis=axis)
-
- iou_score = inter_area / tf.maximum(union_area, 1.0)
-
- return iou_score
+    For a given black box explainer, this class allows to find explanations of an object
+    detector model. The object detector model shall return a list (of length the batch size)
+    containing a tensor of 2 dimensions.
+    The first dimension of the tensor is the number of bounding boxes found in the image.
+    The second dimension is:
+    [xmin, ymin, xmax, ymax, probability_detection, one_hot_classif_result]
+    This work is a generalization of the following article to any kind of black box explainer
+    and can also be used for other kinds of object detectors (like segmentation).
+    Ref. Petsiuk & al., Black-box Explanation of Object Detectors via Saliency Maps (2021).
+    https://arxiv.org/pdf/2006.03204.pdf
-class BoxIouCalculator(IouCalculator):
- """
- Used to compute the Bounding Box IOU.
+ Parameters
+ ----------
+    explainer
+        The black box explainer used to explain the object detector model.
+    _
+        Unused, kept for backward compatibility with older versions.
+    intersection_score
+        The IOU calculator used to compare two objects.
"""
- EPSILON = tf.constant(1e-4)
-
- def intersect(self, boxes_a: tf.Tensor, boxes_b: tf.Tensor) -> tf.Tensor:
- """
- Compute the intersection between two batched bounding boxes.
- Each bounding box is defined by (x1, y1, x2, y2) respectively (left, bottom, right, top).
-
- Parameters
- ----------
- boxes_a
- First batch of bounding boxes.
- boxes_b
- Second batch of bounding boxes.
-
- Returns
- -------
- iou_score
- The IOU score between the first and second batch of bounding boxes.
- """
- # pylint: disable=W0221,W0237
-
- # determine the intersection rectangle
- left = tf.maximum(boxes_a[..., 0], boxes_b[..., 0])
- bottom = tf.maximum(boxes_a[..., 1], boxes_b[..., 1])
- right = tf.minimum(boxes_a[..., 2], boxes_b[..., 2])
- top = tf.minimum(boxes_a[..., 3], boxes_b[..., 3])
-
- intersection_area = tf.math.maximum(right - left, 0) * tf.math.maximum(top - bottom, 0)
-
- # determine the areas of the prediction and ground-truth rectangles
- a_area = (boxes_a[..., 2] - boxes_a[..., 0]) * (boxes_a[..., 3] - boxes_a[..., 1])
- b_area = (boxes_b[..., 2] - boxes_b[..., 0]) * (boxes_b[..., 3] - boxes_b[..., 1])
-
- union_area = a_area + b_area - intersection_area
-
- iou_score = intersection_area / (union_area + BoxIouCalculator.EPSILON)
-
- return iou_score
-
-
-class IObjectFormater:
- """
- Generic class to format the model prediction
- """
- def format_objects(self, predictions) -> Iterable[Tuple[tf.Tensor, tf.Tensor, tf.Tensor]]:
- """
- Format the model prediction of a given image to have the prediction of the following format:
- objects, proba_detection, one_hots_classifications
-
- Parameters
- ----------
- predictions
- prediction of the model of a given image
-
- Returns
- -------
- object
- bounding box or mask component of the prediction
- proba
- existence probability component of the prediction
- classification
- classification component of the prediction
- """
- raise NotImplementedError()
+ @deprecated(version="1.0.0", reason=OLD_OBJECT_DETECTION_DEPRECATION_MESSAGE)
+ def __init__(self,
+ explainer: BlackBoxExplainer,
+ _: Optional[Callable] = _format_objects,
+ intersection_score: Optional[Callable] = _box_iou):
+ # make operator function based on arguments
+ operator = lambda model, inputs, targets: \
+ object_detection_operator(model, inputs, targets, intersection_score)
+
+ # BlackBoxExplainer init to set operator
+ super().__init__(model=explainer.model, batch_size=explainer.batch_size, operator=operator)
+ self.explainer = explainer
-class ImageObjectDetectorScoreCalculator:
- """
- Class to compute batch score
- """
- def __init__(self, object_formater: IObjectFormater, iou_calculator: IouCalculator):
- self.object_formater = object_formater
- self.iou_calculator = iou_calculator
+ # update explainer inference functions for explain method
+ self.explainer.inference_function = self.inference_function
+ self.explainer.batch_inference_function = self.batch_inference_function
- self.batch_score = operator_batching(self.score)
+ if isinstance(self.explainer, WhiteBoxExplainer):
+ # check and get gradient function from model and operator
+ self.gradient, self.batch_gradient = get_gradient_functions(self.model, operator)
+ self.explainer.gradient = self.gradient
+ self.explainer.batch_gradient = self.batch_gradient
- def score(self, model, inp, object_ref) -> tf.Tensor:
+ def explain(self,
+ inputs: Union[tf.data.Dataset, tf.Tensor, np.array],
+ targets: Optional[Union[tf.Tensor, np.array]] = None) -> tf.Tensor:
"""
- Compute the matching score between prediction and a given object
+        Compute the object detection explanation through the wrapped explainer.
Parameters
----------
- model
- the model used for the object detection
- inp
- the batched image
- object_ref
- the object target to compare with the prediction of the model
+ inputs
+ Dataset, Tensor or Array. Input samples to be explained.
+ If Dataset, targets should not be provided (included in Dataset).
+ Expected shape (N, H, W, C).
+ More information in the documentation.
+        targets
+            Tensor or Array. Boxes to explain, one specification per input, with features
+            matching the object formatting
+            [boxes_coordinates, proba_detection, one_hot_classification].
+            Therefore, the expected shape is (N, ...).
+            See the object detection operator documentation for more information.
Returns
-------
- score
- for each image, the matching score between the object of reference and
- the prediction of the model
- """
- objects = model(inp)
- score_values = []
- for obj, obj_ref in zip(objects, object_ref):
- if obj is None or obj.shape[0] == 0:
- score_values.append(tf.constant(0.0, dtype=inp.dtype))
- else:
- current_boxes, proba_detection, classification = \
- self.object_formater.format_objects(obj)
-
- if len(tf.shape(obj_ref)) == 1:
- obj_ref = tf.expand_dims(obj_ref, axis=0)
-
- obj_ref = self.object_formater.format_objects(obj_ref)
-
- scores = []
- size = tf.shape(current_boxes)[0]
- for boxes_ref, proba_ref, class_ref in zip(*obj_ref):
- boxes_ref = tf.repeat(tf.expand_dims(boxes_ref, axis=0), repeats=size, axis=0)
- proba_ref = tf.repeat(tf.expand_dims(proba_ref, axis=0), repeats=size, axis=0)
- class_ref = tf.repeat(tf.expand_dims(class_ref, axis=0), repeats=size, axis=0)
-
- iou = self.iou_calculator.intersect(boxes_ref, current_boxes)
- classification_similarity = tf.reduce_sum(class_ref * classification, axis=1) \
- / (tf.norm(classification, axis=1) * tf.norm(class_ref, axis=1))
-
- current_score = iou * tf.squeeze(proba_detection, axis=1) \
- * classification_similarity
- current_score = tf.reduce_max(current_score)
- scores.append(current_score)
-
- score_value = tf.reduce_max(tf.stack(scores))
- score_values.append(score_value)
-
- score_values = tf.stack(score_values)
-
- return score_values
-
-
-class ImageObjectDetectorExplainer(BlackBoxExplainer):
- """
- Used to define method as an object detector one
- """
-
- def __init__(self, explainer: BlackBoxExplainer, object_detector_formater: IObjectFormater,
- iou_calculator: IouCalculator):
+ explanation
+ The resulting object detection explanation
"""
- Constructor
-
- Parameters
- ----------
- explainer
- the black box explainer used to explain the object detector model
- object_detector_formater
- the formater of the object detector model used to format the prediction
- of the right format
- iou_calculator
- the iou calculator used to compare two objects.
- """
- super().__init__(explainer.model, explainer.batch_size)
- self.explainer = explainer
- self.score_calculator = ImageObjectDetectorScoreCalculator(object_detector_formater,
- iou_calculator)
- self.explainer.inference_function = self.score_calculator.score
- self.explainer.batch_inference_function = self.score_calculator.batch_score
-
- def explain(self, inputs: Union[tf.data.Dataset, tf.Tensor, np.array],
- targets: Optional[Union[tf.Tensor, np.array]] = None) -> tf.Tensor:
if len(tf.shape(targets)) == 1:
targets = tf.expand_dims(targets, axis=0)
return self.explainer.explain(inputs, targets)
-
-
-class BoundingBoxesExplainer(ImageObjectDetectorExplainer, IObjectFormater):
- """
- For a given black box explainer, this class allows to find explications of an object detector
- model. The object model detector shall return a list (length of the size of the batch)
- containing a tensor of 2 dimensions.
- The first dimension of the tensor is the number of bounding boxes found in the image
- The second dimension is:
- [x1_box, y1_box, x2_box, y2_box, probability_detection, ones_hot_classif_result]
-
- This work is a generalisation of the following article at any kind of black box explainer and
- also can be used for other kind of object detector (like segmentation)
-
- Ref. Petsiuk & al., Black-box Explanation of Object Detectors via Saliency Maps (2021).
- https://arxiv.org/pdf/2006.03204.pdf
- """
-
- def __init__(self, explainer: BlackBoxExplainer):
- super().__init__(explainer, self, BoxIouCalculator())
-
- def format_objects(self, predictions) -> Iterable[Tuple[tf.Tensor, tf.Tensor, tf.Tensor]]:
- boxes, proba_detection, one_hots_classifications = tf.split(predictions,
- [4, 1, tf.shape(predictions[0])[0] - 5], 1)
- return boxes, proba_detection, one_hots_classifications
diff --git a/xplique/commons/__init__.py b/xplique/commons/__init__.py
index 3dfa73af..94237f90 100644
--- a/xplique/commons/__init__.py
+++ b/xplique/commons/__init__.py
@@ -7,7 +7,7 @@
find_layer, open_relu_policy
from .tf_operations import repeat_labels, batch_tensor
from .callable_operations import predictions_one_hot_callable
-from .operators import Tasks, get_operator, check_operator, operator_batching,\
- get_inference_function, get_gradient_functions
+from .operators_operations import (Tasks, get_operator, check_operator, operator_batching,
+ get_inference_function, get_gradient_functions)
from .exceptions import no_gradients_available, raise_invalid_operator
from .forgrad import forgrad
diff --git a/xplique/commons/operators.py b/xplique/commons/operators.py
index 62af7a1f..5770420e 100644
--- a/xplique/commons/operators.py
+++ b/xplique/commons/operators.py
@@ -2,14 +2,13 @@
Custom tensorflow operator for Attributions
"""
-import inspect
-from enum import Enum
+from deprecated import deprecated
import tensorflow as tf
-from ..types import Callable, Optional, Union, OperatorSignature
-from .exceptions import raise_invalid_operator, no_gradients_available
-from .callable_operations import predictions_one_hot_callable
+from ..types import Callable, Optional
+from ..utils_functions.object_detection import _box_iou, _format_objects, _EPSILON
+
@tf.function
def predictions_operator(model: Callable,
@@ -36,6 +35,7 @@ def predictions_operator(model: Callable,
return scores
@tf.function
+@deprecated(version="1.0.0", reason="Gradient-based explanations are zeros with this operator.")
def regression_operator(model: Callable,
inputs: tf.Tensor,
targets: tf.Tensor) -> tf.Tensor:
@@ -63,294 +63,162 @@ def regression_operator(model: Callable,
@tf.function
-def binary_segmentation_operator(model: Callable,
- inputs: tf.Tensor,
- targets: tf.Tensor) -> tf.Tensor:
+def semantic_segmentation_operator(model, inputs, targets):
"""
- Compute the segmentation score for a batch of samples.
+ Explain the class of a zone of interest.
Parameters
----------
model
Model used for computing predictions.
+ The model outputs should be between 0 and 1, otherwise, applying a softmax is recommended.
inputs
Input samples to be explained.
+ Expected shape of (n, h, w, c_in), with c_in the number of channels of the input.
targets
- One-hot encoded labels or regression target (e.g {+1, -1}), one for each sample.
+ Tensor, a mask indicating the zone and class to explain.
+ It contains the model predictions limited to a certain zone and channel.
+ The zone indicates the zone of interest and the channel the class of interest.
+ For more detail and examples please refer to the documentation.
+ https://deel-ai.github.io/xplique/latest/api/attributions/semantic_segmentation/
+ Expected shape of (n, h, w, c_out), with c_out the number of classes.
+ `targets` can also be designed to explain the border of a zone of interest.
Returns
-------
scores
Segmentation scores computed.
"""
- scores = tf.reduce_sum(model(inputs) * targets, axis=(1, 2))
- return scores
+    # mask the model predictions with the zone and class of interest
+ scores = model(inputs) * targets
+
+ # take mean over the zone and channel of interest
+ return tf.reduce_sum(scores, axis=(1, 2, 3)) /\
+ tf.reduce_sum(tf.cast(tf.not_equal(targets, 0), tf.float32), axis=(1, 2, 3))
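The reduction above (sum of the masked predictions divided by the number of non-zero target elements) can be checked with plain NumPy arithmetic. Shapes and values below are purely illustrative:

```python
import numpy as np

# toy prediction map: (n, h, w, c_out) = (1, 2, 2, 3)
preds = np.array([[[[0.9, 0.1, 0.0], [0.2, 0.8, 0.0]],
                   [[0.5, 0.5, 0.0], [0.1, 0.1, 0.8]]]])

# zone of interest: top row, class 0 -> predictions kept there, zeros elsewhere
targets = np.zeros_like(preds)
targets[0, 0, :, 0] = preds[0, 0, :, 0]

# score = sum(preds * targets) / count(targets != 0), per sample
scores = (preds * targets).sum(axis=(1, 2, 3)) / np.count_nonzero(targets, axis=(1, 2, 3))
print(scores)  # [0.425] i.e. (0.9*0.9 + 0.2*0.2) / 2
```

Because `targets` holds the model predictions themselves on the zone of interest, the score is the mean squared prediction over that zone.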
@tf.function
-def segmentation_operator(model: Callable,
- inputs: tf.Tensor,
- targets: tf.Tensor) -> tf.Tensor:
- """
- Compute the segmentation score for a batch of samples.
+def object_detection_operator(model: Callable,
+ inputs: tf.Tensor,
+ targets: tf.Tensor,
+ intersection_score_fn: Optional[Callable] = _box_iou,
+ include_detection_probability: Optional[bool] = True,
+ include_classification_score: Optional[bool] = True,) -> tf.Tensor:
+ """
+ Compute the object detection scores for a batch of samples.
+
+    For a given image, there are two possibilities:
+    - One box per image is provided: then, in the case of perturbation-based methods,
+    the model makes predictions on the perturbed image and the most similar predicted box
+    is chosen. This similarity is computed following the DRise method.
+    In the case of gradient-based methods, the gradient is computed from the same score.
+    - Several boxes are provided for one image: in this case, the attributions for each box
+    are computed and their mean is taken.
+
+    Therefore, to explain each box separately, the easiest way is to call the attribution method
+    with a batch of the same image tiled to match the number of predicted boxes.
+    In this case, inputs and targets shapes should be: (nb_boxes, H, W, C) and (nb_boxes, (5 + nc)).
+
+    This work is a generalization of the following article to any kind of attribution method.
+ Ref. Petsiuk & al., Black-box Explanation of Object Detectors via Saliency Maps (2021).
+ https://arxiv.org/pdf/2006.03204.pdf
Parameters
----------
model
- Model used for computing predictions.
+ Model used for computing object detection prediction.
+ The model should have input and output shapes of (N, H, W, C) and (N, nb_boxes, (4+1+nc)).
+        The model should not include the NMS computation,
+        as it is not differentiable and drastically reduces the number of boxes for the matching.
inputs
- Input samples to be explained.
+ Batched input samples to be explained. Expected shape (N, H, W, C).
+ More information in the documentation.
targets
- One-hot encoded labels or regression target (e.g {+1, -1}), one for each sample.
+        Specify the box or boxes to explain for each input, preferably after the NMS.
+ It should be of shape (N, (4 + 1 + nc)) or (N, nb_boxes, (4 + 1 + nc)),
+ with nc the number of classes,
+ N the number of samples in the batch (it should match `inputs`),
+ and nb_boxes the number of boxes to explain simultaneously.
+
+        (4 + 1 + nc) means: [boxes_coordinates, proba_detection, one_hot_classifications].
+
+ In the case the nb_boxes dimension is not 1,
+ several boxes will be explained at the same time.
+ To be more precise, explanations will be computed for each box and the mean is returned.
+ intersection_score_fn
+ Function that computes the intersection score between two bounding boxes coordinates.
+ This function is batched. The default value is `_box_iou` computing IOU scores.
+ include_detection_probability
+ Boolean encoding if the box objectness (or detection probability)
+ should be included in DRise score.
+ include_classification_score
+ Boolean encoding if the class associated to the box should be included in DRise score.
Returns
-------
scores
- Segmentation scores computed.
- """
- scores = tf.reduce_sum(model(inputs) * targets, axis=(1, 2, 3))
- return scores
-
-class Tasks(Enum):
- """
- Enumeration of different tasks for which we have defined operators
- """
- CLASSIFICATION = predictions_operator
- REGRESSION = regression_operator
-
- @staticmethod
- def from_string(operator_name: str) -> "Tasks":
- """
- Restore an operator from a string
-
- Parameters
- ----------
- operator_name
- String indicating the operator to restore: must be one
- of 'classification' or 'regression'
-
- Returns
- -------
- operator
- The Tasks object
- """
- assert operator_name in [
- "classification",
- "regression",
- ], "Only 'classification' and 'regression' are supported."
-
- if operator_name == "regression":
- return Tasks.REGRESSION
- return Tasks.CLASSIFICATION
-
-def check_operator(operator: Callable):
- """
- Check if the operator is valid g(f, x, y) -> tf.Tensor
- and raise an exception and return true if so.
-
- Parameters
- ----------
- operator
- Operator to check
-
- Returns
- -------
- is_valid
- True if the operator is valid, False otherwise.
- """
- # handle tf functions
- # pylint: disable=protected-access
- if hasattr(operator, '_python_function'):
- return check_operator(operator._python_function)
-
- # the operator must be callable
- if not hasattr(operator, '__call__'):
- raise_invalid_operator()
-
- # the operator should take at least three arguments
- args = inspect.getfullargspec(operator).args
- if len(args) < 3:
- raise_invalid_operator()
-
- return True
-
-def get_operator(
- operator: Optional[Union[Tasks, str, OperatorSignature]]):
- """
- This function allows to retrieve an operator from: a Tasks, a task name. If the operator
- is a custom one, we simply check if its signature is correct
-
- Parameters
- ----------
- operator
- An operator from the Tasks enum or the task name or a custom operator. If None, use a
- classification operator.
-
- Returns
- -------
- operator
- The operator requested
- """
- # case when no operator is provided
- if operator is None:
- return predictions_operator
-
- # case when the query is a string
- if isinstance(operator, str):
- return Tasks.from_string(operator)
-
- # case when the query belong to the Tasks enum
- if operator in [t.value for t in Tasks]:
- return operator
-
- # case when the operator is a custom one
- assert check_operator(operator)
- return operator
-
-def get_gradient_of_operator(operator):
- """
- Get the gradient of an operator.
-
- Parameters
- ----------
- operator
- Operator to compute the gradient of.
-
- Returns
- -------
- gradient
- Gradient of the operator.
- """
- @tf.function
- def gradient(model, inputs, targets):
- with tf.GradientTape() as tape:
- tape.watch(inputs)
- scores = operator(model, inputs, targets)
-
- return tape.gradient(scores, inputs)
-
- return gradient
-
-
-def operator_batching(operator: OperatorSignature) -> tf.Tensor:
- """
- Take care of batching an operator: (model, inputs, labels).
-
- Parameters
- ----------
- operator
- Any callable that take model, inputs and labels as parameters.
-
- Returns
- -------
- batched_operator
- Function that apply operator by batch.
+ Object detection scores computed following DRise definition:
+ intersection_score * proba_detection * classification_similarity
"""
+ def batch_loop(args):
+ # function to loop on for `tf.map_fn`
+ obj, obj_ref = args
- def batched_operator(model, inputs, targets, batch_size=None):
- if batch_size is not None:
- dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
- results = tf.concat([
- operator(model, x, y)
- for x, y in dataset.batch(batch_size)
- ], axis=0)
- else:
- results = operator(model, inputs, targets)
+ if obj is None or obj.shape[0] == 0:
+ return tf.constant(0.0, dtype=inputs.dtype)
- return results
+ # compute predicted boxes for a given image
+ # (nb_box_pred, 4), (nb_box_pred, 1), (nb_box_pred, nb_classes)
+ current_boxes, proba_detection, classification = _format_objects(obj)
+ size = tf.shape(current_boxes)[0]
- return batched_operator
+ if len(tf.shape(obj_ref)) == 1:
+ obj_ref = tf.expand_dims(obj_ref, axis=0)
+        # DRise considers the reference objectness to be 1
+ # (nb_box_ref, 4), _, (nb_box_ref, nb_classes)
+ boxes_refs, _, class_refs = _format_objects(obj_ref)
-batch_predictions = operator_batching(predictions_operator)
-gradients_predictions = get_gradient_of_operator(predictions_operator)
-batch_gradients_predictions = operator_batching(gradients_predictions)
-batch_predictions_one_hot_callable = operator_batching(predictions_one_hot_callable)
+ # (nb_box_ref, nb_box_pred, 4)
+ boxes_refs = tf.repeat(tf.expand_dims(boxes_refs, axis=1), repeats=size, axis=1)
+ # (nb_box_ref, nb_box_pred)
+ intersection_score = intersection_score_fn(boxes_refs, current_boxes)
-def get_inference_function(
- model: Callable,
- operator: Optional[OperatorSignature] = None):
- """
- Define the inference function according to the model type
+ # (nb_box_pred,)
+ detection_probability = tf.squeeze(proba_detection, axis=1)
- Parameters
- ----------
- model
- Model used for computing explanations.
- operator
- Function g to explain, g take 3 parameters (f, x, y) and should return a scalar,
- with f the model, x the inputs and y the targets. If None, use the standard
- operator g(f, x, y) = f(x)[y].
-
- Returns
- -------
- inference_function
- Same definition as the operator.
- batch_inference_function
- An inference function which treat inputs and targets by batch,
- it has an additionnal parameter `batch_size`.
- """
- if operator is not None:
- # user specified a string, an operator from the ones available or a
- # custom operator, we check if the operator is valid
- # and we wrap it to generate a batching version of this operator
- operator = get_operator(operator)
- inference_function = operator
- batch_inference_function = operator_batching(operator)
+        # set detection probability to 1 if it should not be included
+ detection_probability = tf.cond(tf.cast(include_detection_probability, tf.bool),
+ true_fn=lambda: detection_probability,
+ false_fn=lambda: tf.ones_like(detection_probability))
- elif isinstance(model, (tf.keras.Model, tf.Module, tf.keras.layers.Layer)):
- inference_function = predictions_operator
- batch_inference_function = batch_predictions
+ # (nb_box_ref, nb_box_pred, nb_classes)
+ class_refs = tf.repeat(tf.expand_dims(class_refs, axis=1), repeats=size, axis=1)
- else:
- # completely unknown model (e.g. sklearn), we can't backprop through it
- inference_function = predictions_one_hot_callable
- batch_inference_function = batch_predictions_one_hot_callable
+ # (nb_box_ref, nb_box_pred)
+        classification_score = tf.reduce_sum(class_refs * classification, axis=-1) \
+            / (tf.norm(classification, axis=-1) * tf.norm(class_refs, axis=-1) + _EPSILON)
- return inference_function, batch_inference_function
+        # set classification score to 1 if it should not be included
+ classification_score = tf.cond(tf.cast(include_classification_score, tf.bool),
+ true_fn=lambda: classification_score,
+ false_fn=lambda: tf.ones_like(classification_score))
+        # compute the score as defined in DRise for all possible pairs of boxes
+ # (nb_box_ref, nb_box_pred)
+ boxes_pairwise_scores = intersection_score \
+ * detection_probability \
+ * classification_score
-def get_gradient_functions(
- model: Callable,
- operator: Optional[OperatorSignature] = None):
- """
- Define the gradient function according to the model type
+ # select for a reference box the most similar predicted box score
+ # (nb_box_ref,)
+ ref_boxes_scores = tf.reduce_max(boxes_pairwise_scores, axis=1)
- Parameters
- ----------
- model
- Model used for computing explanations.
- operator
- Function g to explain, g take 3 parameters (f, x, y) and should return a scalar,
- with f the model, x the inputs and y the targets. If None, use the standard
- operator g(f, x, y) = f(x)[y].
+        # average to get a single attribution for several boxes at the same time
+ # ()
+ image_score = tf.reduce_mean(ref_boxes_scores)
+ return image_score
- Returns
- -------
- gradient
- Gradient function of the operator.
- batch_gradient
- An gradient function which treat inputs and targets by batch,
- it has an additionnal parameter `batch_size`.
- """
- if operator is not None:
- # user specified a string, an operator from the ones available or a
- # custom operator, we check if the operator is valid
- # and we wrap it to generate a batching version of this operator
- operator = get_operator(operator)
- gradient = get_gradient_of_operator(operator)
- batch_gradient = operator_batching(gradient)
-
- elif isinstance(model, tf.keras.Model):
- # no custom operator, for keras model we can backprop through the model
- gradient = gradients_predictions
- batch_gradient = batch_gradients_predictions
-
- else:
- # custom model or completely unknown model (e.g. sklearn), we can't backprop through it
- gradient = no_gradients_available
- batch_gradient = no_gradients_available
-
- return gradient, batch_gradient
-
\ No newline at end of file
+ objects = model(inputs)
+ return tf.map_fn(batch_loop, (objects, targets), fn_output_signature=tf.float32)
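The DRise pairwise score computed in `batch_loop` multiplies an intersection (IOU) term, a detection probability, and a cosine classification similarity, then keeps the best-matching predicted box. A NumPy sketch of that score for one reference box and two predicted boxes; `box_iou` below is a stand-in for the library's `_box_iou`, and all values are illustrative:

```python
import numpy as np

def box_iou(a, b):
    # boxes as [xmin, ymin, xmax, ymax]
    left = np.maximum(a[..., 0], b[..., 0])
    bottom = np.maximum(a[..., 1], b[..., 1])
    right = np.minimum(a[..., 2], b[..., 2])
    top = np.minimum(a[..., 3], b[..., 3])
    inter = np.maximum(right - left, 0) * np.maximum(top - bottom, 0)
    area_a = (a[..., 2] - a[..., 0]) * (a[..., 3] - a[..., 1])
    area_b = (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area_a + area_b - inter + 1e-4)

ref_box = np.array([0., 0., 10., 10.])
pred_boxes = np.array([[0., 0., 10., 10.], [5., 5., 15., 15.]])
iou = box_iou(ref_box[None], pred_boxes)      # ~[1.0, 0.143]

proba = np.array([0.9, 0.8])                  # objectness of the predicted boxes
cls_ref = np.array([1., 0.])                  # reference one-hot class
cls_pred = np.array([[0.9, 0.1], [0.2, 0.8]]) # predicted class scores

# cosine similarity between reference and predicted classifications
cos_sim = (cls_pred @ cls_ref) / (np.linalg.norm(cls_pred, axis=1)
                                  * np.linalg.norm(cls_ref) + 1e-4)

scores = iou * proba * cos_sim                # DRise score per predicted box
drise_score = scores.max()                    # keep the best-matching box
```

The first predicted box dominates here since it overlaps the reference perfectly and agrees on the class; this mirrors the `tf.reduce_max` over `boxes_pairwise_scores` in the operator.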
diff --git a/xplique/commons/operators_operations.py b/xplique/commons/operators_operations.py
new file mode 100644
index 00000000..09c815a3
--- /dev/null
+++ b/xplique/commons/operators_operations.py
@@ -0,0 +1,296 @@
+"""
+Custom tensorflow operator for Attributions
+"""
+
+import inspect
+from enum import Enum
+
+import tensorflow as tf
+
+from ..types import Callable, Optional, Union, OperatorSignature
+from .exceptions import raise_invalid_operator, no_gradients_available
+from .callable_operations import predictions_one_hot_callable
+from .operators import (predictions_operator, # regression_operator,
+ semantic_segmentation_operator, object_detection_operator)
+
+
+class Tasks(Enum):
+ """
+ Enumeration of different tasks for which we have defined operators
+ """
+ CLASSIFICATION = predictions_operator
+    # `regression_operator` does not work for gradient-based methods,
+    # the problem is its use for multi-output regression
+ # REGRESSION = regression_operator
+ REGRESSION = predictions_operator
+ SEMANTIC_SEGMENTATION = semantic_segmentation_operator
+ OBJECT_DETECTION = object_detection_operator
+
+ # object detection operator limited to box position explanation
+ OBJECT_DETECTION_BOX_POSITION = lambda model, inputs, targets:(
+ object_detection_operator(model, inputs, targets,
+ include_detection_probability=False,
+ include_classification_score=False)
+ )
+
+ # object detection operator limited to box proba and position explanation
+ OBJECT_DETECTION_BOX_PROBA = lambda model, inputs, targets:(
+ object_detection_operator(model, inputs, targets,
+ include_detection_probability=True,
+ include_classification_score=False)
+ )
+
+ # object detection operator limited to box class and position explanation
+ OBJECT_DETECTION_BOX_CLASS = lambda model, inputs, targets:(
+ object_detection_operator(model, inputs, targets,
+ include_detection_probability=False,
+ include_classification_score=True)
+ )
+
+ @staticmethod
+ def from_string(operator_name: str) -> "Tasks":
+ """
+ Restore an operator from a string
+
+ Parameters
+ ----------
+        operator_name
+            String indicating the operator to restore: must be one of
+            'classification', 'regression', 'semantic segmentation', 'object detection',
+            'object detection box position', 'object detection box proba',
+            or 'object detection box class'.
+
+ Returns
+ -------
+ operator
+ The Tasks object
+ """
+
+ string_to_tasks = {
+ "classification": Tasks.CLASSIFICATION,
+ "regression": Tasks.REGRESSION,
+ "semantic segmentation": Tasks.SEMANTIC_SEGMENTATION,
+ "object detection": Tasks.OBJECT_DETECTION,
+ "object detection box position": Tasks.OBJECT_DETECTION_BOX_POSITION,
+ "object detection box proba": Tasks.OBJECT_DETECTION_BOX_PROBA,
+ "object detection box class": Tasks.OBJECT_DETECTION_BOX_CLASS,
+ }
+
+ assert operator_name in string_to_tasks,\
+            f"Only `operator` values among {list(string_to_tasks.keys())} are supported,\n "+\
+ f"but {operator_name} was given."
+
+ return string_to_tasks[operator_name]
+
+
+def check_operator(operator: Callable):
+ """
+    Check that the operator is a valid g(f, x, y) -> tf.Tensor:
+    raise an exception if it is not, and return True otherwise.
+
+ Parameters
+ ----------
+ operator
+ Operator to check
+
+ Returns
+ -------
+ is_valid
+ True if the operator is valid, False otherwise.
+ """
+ # handle tf functions
+ # pylint: disable=protected-access
+ if hasattr(operator, '_python_function'):
+ return check_operator(operator._python_function)
+
+ # the operator must be callable
+ if not hasattr(operator, '__call__'):
+ raise_invalid_operator()
+
+ # the operator should take at least three arguments
+ args = inspect.getfullargspec(operator).args
+ if len(args) < 3:
+ raise_invalid_operator()
+
+ return True
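The signature check above boils down to `inspect.getfullargspec`: any callable taking at least three positional arguments passes. A small sketch with a hypothetical custom operator:

```python
import inspect

def my_operator(model, inputs, targets):
    # a valid custom operator: g(f, x, y) -> scalar score
    return (model(inputs) * targets).sum()

args = inspect.getfullargspec(my_operator).args
print(args)  # ['model', 'inputs', 'targets'] -> at least 3 arguments, valid
```

An operator with fewer than three arguments (e.g. `lambda model, inputs: ...`) would trigger `raise_invalid_operator` instead.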
+
+
+def get_operator(
+ operator: Optional[Union[Tasks, str, OperatorSignature]]):
+ """
+    This function allows to retrieve an operator from a Tasks enum member or a task name.
+    If the operator is a custom one, we simply check that its signature is correct.
+
+ Parameters
+ ----------
+ operator
+ An operator from the Tasks enum or the task name or a custom operator. If None, use a
+ classification operator.
+
+ Returns
+ -------
+ operator
+ The operator requested
+ """
+ # case when no operator is provided
+ if operator is None:
+ return predictions_operator
+
+ # case when the query is a string
+ if isinstance(operator, str):
+ return Tasks.from_string(operator)
+
+ # case when the query belong to the Tasks enum
+ if operator in [t.value for t in Tasks]:
+ return operator
+
+ # case when the operator is a custom one
+ assert check_operator(operator)
+ return operator
+
+
+def get_gradient_of_operator(operator):
+ """
+ Get the gradient of an operator.
+
+ Parameters
+ ----------
+ operator
+ Operator of which to compute the gradient.
+
+ Returns
+ -------
+ gradient
+ Gradient of the operator.
+ """
+ @tf.function
+ def gradient(model, inputs, targets):
+ with tf.GradientTape() as tape:
+ tape.watch(inputs)
+ scores = operator(model, inputs, targets)
+
+ return tape.gradient(scores, inputs)
+
+ return gradient
+
+
+def operator_batching(operator: OperatorSignature) -> Callable:
+    """
+    Take care of batching an operator: (model, inputs, labels).
+
+    Parameters
+    ----------
+    operator
+        Any callable that takes model, inputs and labels as parameters.
+
+    Returns
+    -------
+    batched_operator
+        Function that applies the operator by batch.
+ """
+
+ def batched_operator(model, inputs, targets, batch_size=None):
+ if batch_size is not None:
+ dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
+ results = tf.concat([
+ operator(model, x, y)
+ for x, y in dataset.batch(batch_size)
+ ], axis=0)
+ else:
+ results = operator(model, inputs, targets)
+
+ return results
+
+ return batched_operator
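`operator_batching` relies on `tf.data` to chunk inputs and concatenate the per-batch results. The same behaviour can be sketched with NumPy to check that batched and unbatched calls agree; the model and operator below are toy stand-ins, not library code:

```python
import numpy as np

def operator(model, inputs, targets):
    # toy operator: per-sample weighted sum of predictions
    return (model(inputs) * targets).sum(axis=1)

def batched_operator(model, inputs, targets, batch_size=None):
    # apply the operator chunk by chunk, then concatenate, as operator_batching does
    if batch_size is None:
        return operator(model, inputs, targets)
    return np.concatenate([
        operator(model, inputs[i:i + batch_size], targets[i:i + batch_size])
        for i in range(0, len(inputs), batch_size)
    ])

model = lambda x: x                      # identity "model"
inputs = np.arange(12.).reshape(6, 2)
targets = np.ones((6, 2))
assert np.allclose(batched_operator(model, inputs, targets, batch_size=4),
                   operator(model, inputs, targets))
```

Batching this way bounds peak memory for perturbation-based methods, which evaluate the operator on many perturbed copies of each input.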
+
+
+batch_predictions = operator_batching(predictions_operator)
+gradients_predictions = get_gradient_of_operator(predictions_operator)
+batch_gradients_predictions = operator_batching(gradients_predictions)
+batch_predictions_one_hot_callable = operator_batching(predictions_one_hot_callable)
+
+
+def get_inference_function(
+ model: Callable,
+ operator: Optional[OperatorSignature] = None):
+ """
+ Define the inference function according to the model type
+
+ Parameters
+ ----------
+ model
+ Model used for computing explanations.
+ operator
+ Function g to explain, g take 3 parameters (f, x, y) and should return a scalar,
+ with f the model, x the inputs and y the targets. If None, use the standard
+ operator g(f, x, y) = f(x)[y].
+
+ Returns
+ -------
+ inference_function
+ Same definition as the operator.
+ batch_inference_function
+ An inference function which treats inputs and targets by batch;
+ it has an additional parameter `batch_size`.
+ """
+ if operator is not None:
+ # user specified a string, an operator from the ones available or a
+ # custom operator, we check if the operator is valid
+ # and we wrap it to generate a batching version of this operator
+ operator = get_operator(operator)
+ inference_function = operator
+ batch_inference_function = operator_batching(operator)
+
+ elif isinstance(model, (tf.keras.Model, tf.Module, tf.keras.layers.Layer)):
+ inference_function = predictions_operator
+ batch_inference_function = batch_predictions
+
+ else:
+ # completely unknown model (e.g. sklearn), we can't backprop through it
+ inference_function = predictions_one_hot_callable
+ batch_inference_function = batch_predictions_one_hot_callable
+
+ return inference_function, batch_inference_function
+
+
+def get_gradient_functions(
+ model: Callable,
+ operator: Optional[OperatorSignature] = None):
+ """
+ Define the gradient function according to the model type.
+
+ Parameters
+ ----------
+ model
+ Model used for computing explanations.
+ operator
+ Function g to explain; g takes 3 parameters (f, x, y) and should return a scalar,
+ with f the model, x the inputs and y the targets. If None, use the standard
+ operator g(f, x, y) = f(x)[y].
+
+ Returns
+ -------
+ gradient
+ Gradient function of the operator.
+ batch_gradient
+ A gradient function which treats inputs and targets by batch;
+ it has an additional parameter `batch_size`.
+ """
+ if operator is not None:
+ # user specified a string, an operator from the ones available or a
+ # custom operator, we check if the operator is valid
+ # and we wrap it to generate a batching version of this operator
+ operator = get_operator(operator)
+ gradient = get_gradient_of_operator(operator)
+ batch_gradient = operator_batching(gradient)
+
+ elif isinstance(model, tf.keras.Model):
+ # no custom operator, for keras model we can backprop through the model
+ gradient = gradients_predictions
+ batch_gradient = batch_gradients_predictions
+
+ else:
+ # custom model or completely unknown model (e.g. sklearn), we can't backprop through it
+ gradient = no_gradients_available
+ batch_gradient = no_gradients_available
+
+ return gradient, batch_gradient
+
\ No newline at end of file
diff --git a/xplique/features_visualizations/maco.py b/xplique/features_visualizations/maco.py
index 4b051e4a..f03ce173 100644
--- a/xplique/features_visualizations/maco.py
+++ b/xplique/features_visualizations/maco.py
@@ -71,7 +71,7 @@ def maco(objective: Objective,
# default to box_size that go from 50% to 5%
box_size_values = tf.cast(np.linspace(0.5, 0.05, nb_steps), tf.float32)
get_box_size = lambda step_i: box_size_values[step_i]
- elif isinstance(box_size, Callable):
+ elif hasattr(box_size, "__call__"):
get_box_size = box_size
elif isinstance(box_size, float):
get_box_size = lambda _ : box_size
@@ -82,7 +82,7 @@ def maco(objective: Objective,
# default to large noise to low noise
noise_values = tf.cast(np.logspace(0, -4, nb_steps), tf.float32)
get_noise_intensity = lambda step_i: noise_values[step_i]
- elif isinstance(noise_intensity, Callable):
+ elif hasattr(noise_intensity, "__call__"):
get_noise_intensity = noise_intensity
elif isinstance(noise_intensity, float):
get_noise_intensity = lambda _ : noise_intensity
diff --git a/xplique/utils_functions/__init__.py b/xplique/utils_functions/__init__.py
new file mode 100644
index 00000000..4c1d8941
--- /dev/null
+++ b/xplique/utils_functions/__init__.py
@@ -0,0 +1,5 @@
+"""
+Functions to ease attributions
+"""
+
+from .segmentation import get_class_zone, get_connected_zone, get_in_out_border, get_common_border
diff --git a/xplique/utils_functions/object_detection.py b/xplique/utils_functions/object_detection.py
new file mode 100644
index 00000000..c8bbbe6b
--- /dev/null
+++ b/xplique/utils_functions/object_detection.py
@@ -0,0 +1,72 @@
+"""
+Operator for object detection
+"""
+from typing import Tuple
+import tensorflow as tf
+
+_EPSILON = tf.constant(1e-4)
+
+
+def _box_iou(boxes_a: tf.Tensor, boxes_b: tf.Tensor) -> tf.Tensor:
+ """
+ Compute the intersection over union (IoU) between two batches of bounding boxes.
+ Each bounding box is defined by (x1, y1, x2, y2), respectively (left, bottom, right, top),
+ with left < right and bottom < top.
+
+ Parameters
+ ----------
+ boxes_a
+ First batch of bounding boxes.
+ boxes_b
+ Second batch of bounding boxes.
+
+ Returns
+ -------
+ iou_score
+ The IOU score between the two batches of bounding boxes.
+ """
+
+ # determine the intersection rectangle
+ left = tf.maximum(boxes_a[..., 0], boxes_b[..., 0])
+ bottom = tf.maximum(boxes_a[..., 1], boxes_b[..., 1])
+ right = tf.minimum(boxes_a[..., 2], boxes_b[..., 2])
+ top = tf.minimum(boxes_a[..., 3], boxes_b[..., 3])
+
+ intersection_area = tf.math.maximum(right - left, 0) * tf.math.maximum(top - bottom, 0)
+
+ # determine the areas of the prediction and ground-truth rectangles
+ a_area = (boxes_a[..., 2] - boxes_a[..., 0]) * (boxes_a[..., 3] - boxes_a[..., 1])
+ b_area = (boxes_b[..., 2] - boxes_b[..., 0]) * (boxes_b[..., 3] - boxes_b[..., 1])
+
+ union_area = a_area + b_area - intersection_area
+
+ iou_score = intersection_area / (union_area + _EPSILON)
+
+ return iou_score
+
+
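The IoU formula above can be checked on paper with two overlapping boxes. A NumPy sketch of the same arithmetic (same formula and epsilon, without TensorFlow; box values are hypothetical):

```python
import numpy as np

# NumPy sketch of the IoU computed by `_box_iou` (same arithmetic, no TF).
EPS = 1e-4

def box_iou(boxes_a, boxes_b):
    # intersection rectangle
    left = np.maximum(boxes_a[..., 0], boxes_b[..., 0])
    bottom = np.maximum(boxes_a[..., 1], boxes_b[..., 1])
    right = np.minimum(boxes_a[..., 2], boxes_b[..., 2])
    top = np.minimum(boxes_a[..., 3], boxes_b[..., 3])

    intersection = np.maximum(right - left, 0) * np.maximum(top - bottom, 0)
    area_a = (boxes_a[..., 2] - boxes_a[..., 0]) * (boxes_a[..., 3] - boxes_a[..., 1])
    area_b = (boxes_b[..., 2] - boxes_b[..., 0]) * (boxes_b[..., 3] - boxes_b[..., 1])
    return intersection / (area_a + area_b - intersection + EPS)

boxes_a = np.array([[0., 0., 2., 2.]])
boxes_b = np.array([[1., 1., 3., 3.]])
iou = box_iou(boxes_a, boxes_b)  # intersection 1, union 7 -> ~0.1428
```

The epsilon in the denominator only guards against empty boxes; it shifts the score by a negligible amount for normal boxes.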
+def _format_objects(predictions: tf.Tensor) -> Tuple[tf.Tensor, tf.Tensor, tf.Tensor]:
+ """
+ Format bounding boxes prediction for object detection operator.
+ It takes a batch of bounding box predictions from the model and divides it into
+ boxes_coordinates, proba_detection, and one_hots_classifications.
+
+ Parameters
+ ----------
+ predictions
+ Batch of bounding boxes predictions of shape (nb_boxes, (4 + 1 + nc)).
+ (4 + 1 + nc) means: [boxes_coordinates, proba_detection, one_hots_classifications].
+ Where nc is the number of classes.
+
+ Returns
+ -------
+ boxes_coordinates
+ A Tensor of shape (nb_boxes, 4) encoding the boxes coordinates.
+ proba_detection
+ A Tensor of shape (nb_boxes, 1) encoding the detection probabilities.
+ one_hots_classifications
+ A Tensor of shape (nb_boxes, nc) encoding the class predictions.
+ """
+ boxes_coordinates, proba_detection, one_hots_classifications = \
+ tf.split(predictions, [4, 1, tf.shape(predictions[0])[0] - 5], 1)
+ return boxes_coordinates, proba_detection, one_hots_classifications
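For a concrete picture of the `(4 + 1 + nc)` layout, a NumPy sketch of the same split with nc = 3 hypothetical classes:

```python
import numpy as np

# Sketch of the split done by `_format_objects` for nc = 3 classes:
# each row is [x1, y1, x2, y2, proba_detection, one_hot(3)].
# Values are hypothetical, for illustration only.
predictions = np.array([[0., 0., 2., 2., 0.9, 0., 1., 0.],
                        [1., 1., 3., 4., 0.7, 1., 0., 0.]])

nc = predictions.shape[1] - 5       # number of classes
boxes = predictions[:, :4]          # (nb_boxes, 4) coordinates
proba = predictions[:, 4:5]         # (nb_boxes, 1) detection probability
one_hots = predictions[:, 5:]       # (nb_boxes, nc) class predictions
```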
diff --git a/xplique/utils_functions/segmentation.py b/xplique/utils_functions/segmentation.py
new file mode 100644
index 00000000..4a1c6824
--- /dev/null
+++ b/xplique/utils_functions/segmentation.py
@@ -0,0 +1,285 @@
+"""
+Functions to prepare `targets` for segmentation attributions
+"""
+
+import cv2
+import numpy as np
+import tensorflow as tf
+
+from ..types import Union, Tuple, List
+
+
+def get_class_zone(predictions: Union[tf.Tensor, np.array], class_id: int) -> tf.Tensor:
+ """
+ Extract a mask for the class `class_id`.
+ The mask corresponds to the pixels where the maximum prediction is the class `class_id`.
+ Other classes' channels are set to zero.
+
+ Parameters
+ ----------
+ predictions
+ Output of the model, it should be the output of a softmax function.
+ We assume the shape (h, w, c).
+ class_id
+ Index of the channel of the class of interest.
+
+ Returns
+ -------
+ class_zone_mask
+ Mask of the zone corresponding to the class of interest.
+ Only the corresponding channel is non-zero.
+ The shape is the same as `predictions`, (h, w, c).
+ """
+ assert len(tf.shape(predictions)) == 3, "predictions should correspond to only one image"
+
+ class_zone = tf.cast(tf.argmax(predictions, axis=-1) == class_id, tf.float32)
+ class_zone_mask = tf.Variable(tf.zeros(predictions.shape))
+ class_zone_mask = class_zone_mask[:, :, class_id].assign(class_zone)
+
+ assert tf.reduce_sum(class_zone_mask) >= 1
+
+ return class_zone_mask
+
+
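The masking logic above can be sketched in NumPy on a toy 2x2 image with 2 classes (values hypothetical; the library version uses `tf.Variable` assignment):

```python
import numpy as np

# NumPy sketch of `get_class_zone`: keep pixels whose argmax equals
# class_id and zero out every other channel.
preds = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.6, 0.4], [0.3, 0.7]]])  # shape (h, w, c) = (2, 2, 2)
class_id = 1

zone = (preds.argmax(axis=-1) == class_id).astype(np.float32)  # (h, w)
mask = np.zeros_like(preds)
mask[:, :, class_id] = zone  # only the class channel is non-zero
```

Here class 1 wins at the two right-hand pixels, so the mask is non-zero only there, and only in channel 1.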
+def get_connected_zone(
+ predictions: Union[tf.Tensor, np.array], coordinates: Tuple[int, int]
+) -> tf.Tensor:
+ """
+ Extract a connected mask around `coordinates`.
+ The mask corresponds to the pixels where the maximum prediction matches
+ the class predicted at `coordinates`.
+ This class mask is then limited to the connected zone around `coordinates`.
+ Other classes' channels are set to zero.
+
+ Parameters
+ ----------
+ predictions
+ Output of the model, it should be the output of a softmax function.
+ We assume the shape (h, w, c).
+ coordinates
+ Tuple of coordinates of the point inside the zone of interest.
+
+ Returns
+ -------
+ connected_zone_mask
+ Mask of the connected zone around `coordinates` with similar class prediction.
+ Only the corresponding channel is non-zero.
+ The shape is the same as `predictions`, (h, w, c).
+ """
+ assert len(tf.shape(predictions)) == 3, "predictions should correspond to only one image"
+
+ assert (
+ coordinates[0] < predictions.shape[0]
+ ), f"Coordinates should be included in the shape, i.e. {coordinates[0]}<{predictions.shape[0]}"
+ assert (
+ coordinates[1] < predictions.shape[1]
+ ), f"Coordinates should be included in the shape, i.e. {coordinates[1]}<{predictions.shape[1]}"
+
+ labels = tf.argmax(predictions, axis=-1)
+ class_id = labels[coordinates[0], coordinates[1]]
+ mask = labels == class_id
+ mask = np.uint8(np.array(mask)[:, :, np.newaxis] * 255)
+
+ components_masks = cv2.connectedComponents(mask)[1] # pylint: disable=no-member
+
+ component_id = components_masks[coordinates[0], coordinates[1]]
+ connected_zone = tf.cast(components_masks == component_id, tf.float32)
+
+ connected_zone_mask = tf.Variable(tf.zeros(predictions.shape))
+ connected_zone_mask = connected_zone_mask[:, :, class_id].assign(connected_zone)
+
+ assert tf.reduce_sum(connected_zone_mask) >= 1
+ assert connected_zone_mask[coordinates[0], coordinates[1], class_id] != 0
+
+ return connected_zone_mask
+
+
+def list_class_connected_zones(
+ predictions: Union[tf.Tensor, np.array],
+ class_id: int,
+ zone_minimum_size: int = 100
+) -> List[tf.Tensor]:
+ """
+ List all connected zones for a given class.
+ A connected zone is a set of adjacent pixels
+ where the maximum prediction corresponds to the same class.
+ This function generates a list of connected zones;
+ each element of the list has the same format as `get_connected_zone` outputs.
+
+ Parameters
+ ----------
+ predictions
+ Output of the model, it should be the output of a softmax function.
+ We assume the shape (h, w, c).
+ class_id
+ Index of the channel of the class of interest.
+ zone_minimum_size
+ Threshold of number of pixels under which zones are not returned.
+
+ Returns
+ -------
+ connected_zones_masks_list
+ List of the connected zones masks for a given class.
+ Each zone mask has the same shape as `predictions`, (h, w, c).
+ Only the corresponding channel is non-zero.
+ """
+ assert len(tf.shape(predictions)) == 3, "predictions should correspond to only one image"
+
+ labels = tf.argmax(predictions, axis=-1)
+ mask = labels == class_id
+ mask = np.uint8(np.array(mask)[:, :, np.newaxis] * 255)
+
+ components_masks = cv2.connectedComponents(mask)[1] # pylint: disable=no-member
+
+ sizes = np.bincount(components_masks.ravel())
+
+ connected_zones_masks_list = []
+ for component_id, size in enumerate(sizes[1:]):
+
+ if size > zone_minimum_size:
+
+ connected_zone = tf.cast(components_masks == (component_id + 1), tf.float32)
+
+ all_channels_class_zone_mask = tf.Variable(tf.zeros(predictions.shape))
+ all_channels_class_zone_mask =\
+ all_channels_class_zone_mask[:, :, class_id].assign(connected_zone)
+
+ assert tf.reduce_sum(all_channels_class_zone_mask) >= 1
+
+ connected_zones_masks_list.append(all_channels_class_zone_mask)
+
+ return connected_zones_masks_list
+
+
+def get_in_out_border(
+ class_zone_mask: Union[tf.Tensor, np.array],
+) -> tf.Tensor:
+ """
+ Extract the border of a zone of interest, then put `1` on the
+ inside border and `-1` on the outside border.
+
+ Examples of coefficients extracted from the class channel of the class of interest:
+
+ ```
+ # class_zone_mask[:, :, c]
+ [[0, 0, 0, 0, 0],
+ [0, 0, 0, 1, 1],
+ [0, 0, 1, 1, 1],
+ [0, 0, 1, 1, 1],
+ [0, 1, 1, 1, 1]]
+
+ # border_mask
+ [[ 0, 0, -1, -1, -1],
+ [ 0, -1, -1, 1, 1],
+ [ 0, -1, 1, 1, 0],
+ [-1, -1, 1, 0, 0],
+ [-1, 1, 1, 0, 0]]
+ ```
+
+ Parameters
+ ----------
+ class_zone_mask
+ Mask delimiting the zone of interest for the class of interest;
+ only one channel should have non-zero values,
+ the one corresponding to the class.
+ We assume the shape (h, w, c) same as the model output for one element.
+
+ Returns
+ -------
+ class_borders_masks
+ Mask of the borders of the zone of the class of interest.
+ Only the corresponding channel is non-zero.
+ Inside borders are set to `1` and outside borders are set to `-1`.
+ The shape is the same as `class_zone_mask`, (h, w, c).
+ """
+ assert len(tf.shape(class_zone_mask)) == 3,\
+ "class_zone_mask should correspond to only one image"
+
+ # channel of the class of interest
+ channel_mean = tf.reduce_sum(tf.cast(class_zone_mask, tf.int32), axis=[0, 1])
+ assert (
+ int(tf.reduce_sum(channel_mean)) >= 1
+ ), "The specified `class_target_mask` is empty."
+ class_id = int(tf.argmax(channel_mean))
+
+ # set the target zone to 1 and other values to -1
+ binary_mask = 2 * tf.cast(class_zone_mask[:, :, class_id], tf.int32) - 1
+
+ # extend size with padding for convolution
+ extended_binary_mask = tf.pad(
+ binary_mask,
+ tf.constant([[1, 1], [1, 1]]),
+ "SYMMETRIC",
+ )
+
+ kernel = tf.convert_to_tensor(
+ [[-1, -1, -1], [-1, 13, -1], [-1, -1, -1]], dtype=tf.int32
+ )
+ kernel = kernel[:, :, tf.newaxis, tf.newaxis]
+ conv_result = tf.nn.conv2d(
+ tf.expand_dims(tf.expand_dims(extended_binary_mask, axis=0), axis=-1),
+ kernel,
+ strides=1,
+ padding="VALID",
+ )[0, :, :, 0]
+
+ # 6 < in < 21, -21 < out < -6
+ in_border = tf.logical_and(
+ tf.math.less(tf.constant([6]), conv_result),
+ tf.math.less(conv_result, tf.constant([21])),
+ )
+ out_border = tf.logical_and(
+ tf.math.less(tf.constant([-21]), conv_result),
+ tf.math.less(conv_result, tf.constant([-6])),
+ )
+
+ border_mask = (
+ tf.zeros(binary_mask.shape)
+ + tf.cast(in_border, tf.float32)
+ - tf.cast(out_border, tf.float32)
+ )
+
+ class_borders_masks = tf.Variable(tf.zeros(class_zone_mask.shape))
+ class_borders_masks = class_borders_masks[:, :, class_id].assign(border_mask)
+
+ assert int(tf.reduce_sum(tf.abs(class_borders_masks))) >= 1
+
+ return class_borders_masks
+
+
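The thresholds `6 < s < 21` and `-21 < s < -6` above follow from simple arithmetic on the ±1 mask: with the kernel `[[-1,-1,-1],[-1,13,-1],[-1,-1,-1]]`, an inside pixel (+1) with k outside neighbours scores 5 + 2k, and an outside pixel (-1) with m inside neighbours scores -5 - 2m. A minimal sketch of this scoring (illustrative, not the library code):

```python
# Score of one pixel under the kernel [[-1,-1,-1],[-1,13,-1],[-1,-1,-1]]
# applied to a +/-1 mask: 13 * center - sum(8 neighbours).
def border_score(center, neighbours):
    return 13 * center - sum(neighbours)

# inside pixel (+1) with 3 outside neighbours: 5 + 2*3 = 11 -> inner border
inner = border_score(1, [1, 1, 1, 1, 1, -1, -1, -1])
# outside pixel (-1) with 2 inside neighbours: -5 - 2*2 = -9 -> outer border
outer = border_score(-1, [1, 1, -1, -1, -1, -1, -1, -1])
# interior pixel (+1) with all inside neighbours: 13 - 8 = 5 -> not a border
interior = border_score(1, [1] * 8)
```

So pixels strictly interior or exterior to the zone score 5 or -5 and fall outside both ranges, while any pixel touching the boundary lands in one of them.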
+def get_common_border(
+ border_mask_1: Union[tf.Tensor, np.array], border_mask_2: Union[tf.Tensor, np.array]
+) -> tf.Tensor:
+ """
+ Compute the common part between `border_mask_1` and `border_mask_2` masks.
+ Those borders should be computed using `get_in_out_border`.
+
+ Parameters
+ ----------
+ border_mask_1
+ Border of the first zone of interest. Computed with `get_in_out_border`.
+ border_mask_2
+ Border of the second zone of interest. Computed with `get_in_out_border`.
+
+ Returns
+ -------
+ common_borders_masks
+ Mask of the common borders between two zones of interest.
+ Only the two corresponding channels are non-zero.
+ Inside borders are set to `1` and outside borders are set to `-1`,
+ respectively on the two channels.
+ The shape is the same as the input border masks, (h, w, c).
+ """
+ all_channel_border_mask_1 = tf.reduce_any(border_mask_1 != 0, axis=-1)
+ all_channel_border_mask_2 = tf.reduce_any(border_mask_2 != 0, axis=-1)
+
+ common_pixels_mask = tf.logical_and(
+ all_channel_border_mask_1, all_channel_border_mask_2
+ )
+
+ assert tf.reduce_any(common_pixels_mask), "No common border between the two masks."
+
+ return (border_mask_1 + border_mask_2) * tf.expand_dims(
+ tf.cast(common_pixels_mask, tf.float32), -1
+ )
diff --git a/xplique/wrappers/pytorch.py b/xplique/wrappers/pytorch.py
index c1c912bf..78e0a1ce 100644
--- a/xplique/wrappers/pytorch.py
+++ b/xplique/wrappers/pytorch.py
@@ -1,5 +1,5 @@
"""
-Module for having a wrapper for PyTorch's model
+Module for having a wrapper for PyTorch models
"""
import warnings
@@ -10,7 +10,7 @@
class TorchWrapper(tf.keras.Model):
"""
- A wrapper for PyTorch's model so that they can be used in Xplique framework
+ A wrapper for PyTorch models so that they can be used in the Xplique framework
for most attribution methods
Parameters
@@ -53,7 +53,7 @@ def __init__(self, torch_model: "nn.Module", device: Union["torch.device", str],
self.channel_first = is_channel_first
# deactivate all tf.function
tf.config.run_functions_eagerly(True)
- warnings.warn("TF is set to run eagerly to avoid conflict with Pytorch. Thus,\
+ warnings.warn("TF is set to run eagerly to avoid conflict with PyTorch. Thus,\
TF functions might be slower")
# pylint: disable=arguments-differ
@@ -139,7 +139,7 @@ def np_img_to_torch(self, np_inputs: np.ndarray):
def _has_conv_layers(self):
"""
- A method that checks if the PyTorch's model has 2D convolutional layer.
+ A method that checks if the PyTorch model has 2D convolutional layers.
Indeed, convolution with PyTorch expects inputs in the shape (N, C, H, W)
where TF expect (N, H, W, C).