(Keras images) Add an optional image argument, and other improvements #329

Merged · 21 commits · Aug 10, 2019
Changes from 18 commits

Commits (21)
c4eb039
(keras) Make image argument required
teabolt Aug 5, 2019
1111105
Update README.md to include keras
teabolt Aug 6, 2019
7d6d982
Merge branch 'master' of https://github.com/teabolt/eli5 into keras-g…
teabolt Aug 6, 2019
14452c3
Remove mentions of target_names (not implemented)
teabolt Aug 7, 2019
9e85021
Add dispatch function and image implementation
teabolt Aug 7, 2019
9a0cd53
Update dispatcher and image function docstrings
teabolt Aug 7, 2019
6b002a6
Automatically check if model/input is image-based. Convert input to a…
teabolt Aug 7, 2019
7d82c30
Mock keras.preprocessing.image in docs conf (CI fix)
teabolt Aug 7, 2019
d1af643
Update tests, docs, tutorial with image argument changes
teabolt Aug 7, 2019
6aec486
Blank line between header and list in docstring (CI fix)
teabolt Aug 8, 2019
aaa83a7
Test keras not supported function
teabolt Aug 8, 2019
47b03c1
Docstring typo
teabolt Aug 8, 2019
df25038
Clarify "not supported" error message.
teabolt Aug 8, 2019
6234aaa
Remove TODO: explain Grad-CAM in docstring. (Will be explained in ker…
teabolt Aug 8, 2019
13e1847
Move image extraction call from dispatcher to image function
teabolt Aug 8, 2019
3e875bb
Move Keras to second place in supported package list
teabolt Aug 8, 2019
4192939
Remove warnings for 'maybe image' dispatch and conversion to RGBA
teabolt Aug 8, 2019
2bb7ba5
'not supported' error typo
teabolt Aug 8, 2019
991159b
Test 'maybe image' check with both input and model
teabolt Aug 9, 2019
10921be
Add Grad-CAM image to README
teabolt Aug 9, 2019
0cf31fe
Remove line breaking backslash from docstring
teabolt Aug 10, 2019
4 changes: 4 additions & 0 deletions README.rst
@@ -36,6 +36,8 @@ It provides support for the following machine learning frameworks and packages:
It also allows to debug scikit-learn pipelines which contain
HashingVectorizer, by undoing hashing.

* Keras_ - explain predictions of image classifiers via Grad-CAM visualizations.

* xgboost_ - show feature importances and explain predictions of XGBClassifier,
XGBRegressor and xgboost.Booster.

@@ -51,6 +53,7 @@ It provides support for the following machine learning frameworks and packages:
* sklearn-crfsuite_. ELI5 allows to check weights of sklearn_crfsuite.CRF
models.


ELI5 also implements several algorithms for inspecting black-box models
(see `Inspecting Black-Box Estimators`_):

@@ -75,6 +78,7 @@ and formatting on a client.
.. _xgboost: https://github.com/dmlc/xgboost
.. _LightGBM: https://github.com/Microsoft/LightGBM
.. _Catboost: https://github.com/catboost/catboost
.. _Keras: https://keras.io/
.. _Permutation importance: https://eli5.readthedocs.io/en/latest/blackbox/permutation_importance.html
.. _Inspecting Black-Box Estimators: https://eli5.readthedocs.io/en/latest/blackbox/index.html

74 changes: 48 additions & 26 deletions docs/source/_notebooks/keras-image-classifiers.rst
@@ -70,11 +70,11 @@ Loading our sample image:

# we start from a path / URI.
# If you already have an image loaded, follow the subsequent steps
image = 'imagenet-samples/cat_dog.jpg'
image_uri = 'imagenet-samples/cat_dog.jpg'

# this is the original "cat dog" image used in the Grad-CAM paper
# check the image with Pillow
im = Image.open(image)
im = Image.open(image_uri)
print(type(im))
display(im)

@@ -96,14 +96,14 @@ dimensions! Let's resize it:
# we could resize the image manually
# but instead let's use a utility function from `keras.preprocessing`
# we pass the required dimensions as a (height, width) tuple
im = keras.preprocessing.image.load_img(image, target_size=dims) # -> PIL image
print(type(im))
im = keras.preprocessing.image.load_img(image_uri, target_size=dims) # -> PIL image
print(im)
display(im)


.. parsed-literal::

<class 'PIL.Image.Image'>
<PIL.Image.Image image mode=RGB size=224x224 at 0x7FBF0DDE5A20>
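
For readers without keras installed, the same resize can be sketched with Pillow alone. A blank stand-in image is used here (the actual source size of ``cat_dog.jpg`` may differ); note that PIL's ``resize`` takes ``(width, height)``, whereas ``load_img``'s ``target_size`` is ``(height, width)``:

```python
from PIL import Image

# stand-in for the loaded cat_dog.jpg (actual source dimensions may differ)
im = Image.new('RGB', (500, 375))

# PIL ordering is (width, height), unlike target_size's (height, width)
im = im.resize((224, 224))
```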



@@ -143,7 +143,6 @@ Looking good. Now we need to convert the image to a numpy array.

.. code:: ipython3

# one last thing
# `keras.applications` models come with their own input preprocessing function
# for best results, apply that as well

@@ -164,11 +163,18 @@ inputting
.. code:: ipython3

# take back the first image from our 'batch'
display(keras.preprocessing.image.array_to_img(doc[0]))
image = keras.preprocessing.image.array_to_img(doc[0])
print(image)
display(image)


.. parsed-literal::

<PIL.Image.Image image mode=RGB size=224x224 at 0x7FBF0CF760F0>



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_13_0.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_13_1.png


Ready to go!
@@ -218,6 +224,8 @@ for a dog with ELI5:

.. code:: ipython3

# we need to pass the model
# and the input as a numpy array
eli5.show_prediction(model, doc)


@@ -229,8 +237,21 @@ for a dog with ELI5:

The dog region is highlighted. Makes sense!

Note that here we made a prediction twice. Once when looking at top
predictions, and a second time when passing the model through ELI5.
When explaining image-based models, we can optionally pass the image
associated with the input as a Pillow image object. If we don't, the
image will be created from ``doc``. This may not work with custom models
or inputs, in which case it's worth passing the image explicitly.

.. code:: ipython3

eli5.show_prediction(model, doc, image=image)




.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_22_0.png



3. Choosing the target class (target prediction)
------------------------------------------------
@@ -246,7 +267,7 @@ classifier looks to find those objects.



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_22_0.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_24_0.png



@@ -264,11 +285,11 @@ Currently only one class can be explained at a time.



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_24_0.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_26_0.png



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_24_1.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_26_1.png


That's quite noisy! Perhaps the model is weak at classifying 'window
@@ -354,7 +375,7 @@ Rough print but okay. Let's pick a few convolutional layers that are



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_29_1.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_31_1.png


.. parsed-literal::
@@ -363,7 +384,7 @@



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_29_3.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_31_3.png


.. parsed-literal::
@@ -372,7 +393,7 @@



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_29_5.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_31_5.png


These results should make intuitive sense for Convolutional Neural
@@ -417,7 +438,7 @@ Examining the structure of the ``Explanation`` object:
[0. , 0. , 0. , 0. , 0. ,
0. , 0.05308531],
[0. , 0. , 0. , 0. , 0. ,
0.01124764, 0.06864655]]))], feature_importances=None, decision_tree=None, highlight_spaces=None, transition_features=None, image=<PIL.Image.Image image mode=RGBA size=224x224 at 0x7FCA6FD17CC0>)
0.01124764, 0.06864655]]))], feature_importances=None, decision_tree=None, highlight_spaces=None, transition_features=None, image=<PIL.Image.Image image mode=RGB size=224x224 at 0x7FBEFD7F4080>)


We can check the score (raw value) or probability (normalized score) of
Expand Down Expand Up @@ -446,7 +467,7 @@ We can also access the original image and the Grad-CAM heatmap:



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_39_0.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_41_0.png


.. parsed-literal::
@@ -476,7 +497,7 @@ Visualizing the heatmap:



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_41_0.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_43_0.png


That's only 7x7! This is the spatial dimensions of the
@@ -494,7 +515,7 @@ resampling):



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_43_0.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_45_0.png


Now it's clear what is being highlighted. We just need to apply some
@@ -508,7 +529,7 @@ colors and overlay the heatmap over the original image, exactly what



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_45_0.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_47_0.png


6. Extra arguments to ``format_as_image()``
@@ -525,7 +546,7 @@



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_48_0.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_50_0.png


The ``alpha_limit`` argument controls the maximum opacity that the
@@ -575,7 +596,7 @@ check the explanation:



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_51_1.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_53_1.png


.. parsed-literal::
@@ -584,7 +605,7 @@



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_51_3.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_53_3.png


We see some slight differences. The activations are brighter. Do
@@ -610,6 +631,7 @@ loading another model and explaining a classification of the same image:
nasnet.preprocess_input(doc2)

print(model.name)
# note that this model is without softmax
display(eli5.show_prediction(model, doc))
print(model2.name)
display(eli5.show_prediction(model2, doc2))
@@ -621,7 +643,7 @@



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_54_1.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_56_1.png


.. parsed-literal::
@@ -630,7 +652,7 @@



.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_54_3.png
.. image:: ../_notebooks/keras-image-classifiers_files/keras-image-classifiers_56_3.png


Wow ``show_prediction()`` is so robust!
1 change: 1 addition & 0 deletions docs/source/conf.py
@@ -53,6 +53,7 @@ def __getattr__(cls, name):
'keras.backend',
'keras.models',
'keras.layers',
'keras.preprocessing.image',
'pandas',
'PIL',
'matplotlib',
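
The list above feeds the standard Sphinx mocking recipe, which is why adding ``'keras.preprocessing.image'`` fixes the docs build on CI. A simplified sketch follows (the real ``conf.py`` uses a custom ``Mock`` class, and module names beyond those visible in the hunk are assumptions):

```python
import sys
from unittest.mock import MagicMock

# replace heavy optional dependencies with dummy modules so Sphinx can
# import eli5 on CI without installing them
MOCK_MODULES = [
    'keras', 'keras.backend', 'keras.models', 'keras.layers',
    'keras.preprocessing', 'keras.preprocessing.image',
]
sys.modules.update((mod_name, MagicMock()) for mod_name in MOCK_MODULES)
```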
12 changes: 8 additions & 4 deletions docs/source/libraries/keras.rst
@@ -21,7 +21,7 @@ Currently ELI5 supports :func:`eli5.explain_prediction` for Keras image classifi

The returned :class:`eli5.base.Explanation` instance contains some important objects:

* ``image`` represents the image input into the model. A Pillow image with mode "RGBA".
* ``image`` represents the image input into the model. A Pillow image.

* ``targets`` represents the explanation values for each target class (currently only 1 target is supported). A list of :class:`eli5.base.TargetExplanation` objects with the following attributes set:

@@ -42,9 +42,13 @@ Important arguments to :func:`eli5.explain_prediction` for ``Model`` and ``Sequential``

- Check ``model.input_shape`` to confirm the required dimensions of the input tensor.

* ``target_names`` are the names of the output classes.

- *Currently not implemented*.
* ``image`` is a Pillow image corresponding to the ``doc`` input.

- The image over which to overlay the heatmap.

- If not given, the image will be derived from ``doc`` where possible.

- Useful when ELI5 fails to derive the image, e.g. for a custom image model or input.

* ``targets`` are the output classes to focus on. Possible values include:

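
As a sketch of what "derived from ``doc``" means (illustrative only, not eli5's exact code path; assumes channel values are already in the 0-255 range):

```python
import numpy as np
from PIL import Image

# stand-in for a preprocessed batch of one image, shape (1, height, width, 3)
doc = np.random.uniform(0, 255, size=(1, 224, 224, 3))

# drop the batch axis and rebuild a Pillow image from the array
image = Image.fromarray(doc[0].astype('uint8'), mode='RGB')
```

The resulting image can then be passed explicitly, e.g. ``eli5.explain_prediction(model, doc, image=image)``.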
6 changes: 3 additions & 3 deletions docs/source/overview.rst
@@ -31,6 +31,9 @@ following machine learning frameworks and packages:
highlight text data accordingly. It also allows to debug scikit-learn
pipelines which contain HashingVectorizer, by undoing hashing.

* :ref:`library-keras` - explain predictions of image classifiers
via Grad-CAM visualizations.

* :ref:`library-xgboost` - show feature importances and explain predictions
of XGBClassifier, XGBRegressor and xgboost.Booster.

@@ -45,9 +48,6 @@ following machine learning frameworks and packages:
* :ref:`library-sklearn-crfsuite`. ELI5 allows to check weights of
sklearn_crfsuite.CRF models.

* :ref:`library-keras` - explain predictions of image classifiers
via Grad-CAM visualizations.

ELI5 also implements several algorithms for inspecting black-box models
(see :ref:`eli5-black-box`):
