Shallow convolutional notebook #16
base: develop
Conversation
LGTM. Some of the text could do with an extra proofread, though. I already modified some of it - feel free to adjust to your liking. Also, can you please update the text to use triple quotes `"""` rather than `#`, to be consistent with the other notebooks?
# %% [markdown]
"""
# Title
Lorem ipsum
"""
@@ -0,0 +1,178 @@
# %% [markdown]
# In this notebook we revisit the Convolutional Gaussian Processes (ConvGP), <cite data-cite="van2017convolutional"/>. Similarly to convolutional neural networks, ConvGP suits very well to model image processing tasks. As well as CNN, the ConvGP is endowed with translation invariant property. ConvGP imposes stronger and structured prior on a image response function $f(\cdot)$, using patch response function $g(\cdot) \sim GP(0, k_g(\cdot, \cdot))$. The image response function is a sum of patch responses for all (overlapping) patches in the image $f(\mathbb{x}) = \sum_{p=1}^{P}g(\mathbb{x}^{[p]})$, where $p$ is an index of a patch of image, and therefore $f(\cdot) \sim GP(0, \sum_p \sum_{p'} k_g(x^{[p]}, x^{[p']}))$. In a way, the patch response kernel can be viewed as an equivalent to a convolutional kernel of CNN.
# In this notebook we revisit the Convolutional Gaussian Processes (ConvGP), <cite data-cite="van2017convolutional"/>. Similarly to convolutional neural networks, ConvGP suits very well to model image processing tasks. As well as CNN, the ConvGP is endowed with translation invariant property. ConvGP imposes stronger and structured prior on a image response function $f(\cdot)$, using patch response function $g(\cdot) \sim GP(0, k_g(\cdot, \cdot))$. The image response function is a sum of patch responses for all (overlapping) patches in the image $f(\mathbb{x}) = \sum_{p=1}^{P}g(\mathbb{x}^{[p]})$, where $p$ is an index of a patch of image, and therefore $f(\cdot) \sim GP(0, \sum_p \sum_{p'} k_g(x^{[p]}, x^{[p']}))$. In a way, the patch response kernel can be viewed as an equivalent to a convolutional kernel of CNN.
# In this notebook, we revisit single-layer Convolutional Gaussian Processes (ConvGPs), <cite data-cite="van2017convolutional"/>. Similar to Convolutional Neural Networks (CNNs), ConvGPs are well suited for image processing tasks.
A ConvGP imposes a structured prior on an image response function. That is, the image response function $f(\cdot)$ is a sum of patch responses over all (overlapping) patches in the image:
$$
f(\mathbb{x}) = \sum_{p=1}^{P} g(\mathbb{x}^{[p]}),
$$
where $\mathbb{x}$ is an $H \times W$-sized image and $p$ indexes an $h \times w$-sized patch in the image. Imposing a GP on the patch response function, $g(\cdot) \sim GP(0, k_g(\cdot, \cdot))$, leads to another GP for the image response function,
$$
f(\cdot) \sim GP(0, \sum_p \sum_{p'} k_g(x^{[p]}, x^{[p']})),
$$
because Gaussianity is preserved under linear transformations.
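The patch-sum construction above can be sketched in plain NumPy. This is only an illustration of the math, not the GPflow/GPflux API: `extract_patches`, `rbf`, and `conv_kernel` are hypothetical helper names, and a squared-exponential kernel stands in for the patch response kernel $k_g$.

```python
import numpy as np

def extract_patches(image, patch_shape):
    """Return all overlapping h x w patches of an H x W image, flattened."""
    H, W = image.shape
    h, w = patch_shape
    patches = [
        image[i:i + h, j:j + w].ravel()
        for i in range(H - h + 1)
        for j in range(W - w + 1)
    ]
    return np.stack(patches)  # shape (P, h*w), with P = (H-h+1) * (W-w+1)

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential patch kernel k_g, evaluated pairwise."""
    sq_dist = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq_dist / lengthscale ** 2)

def conv_kernel(x1, x2, patch_shape):
    """Image kernel k_f(x1, x2) = sum_p sum_p' k_g(x1^[p], x2^[p'])."""
    p1 = extract_patches(x1, patch_shape)
    p2 = extract_patches(x2, patch_shape)
    return rbf(p1, p2).sum()

rng = np.random.default_rng(0)
img_a, img_b = rng.normal(size=(2, 5, 5))
print(extract_patches(img_a, (3, 3)).shape)  # (9, 9): P = (5-3+1)**2 = 9 patches of 9 pixels
print(conv_kernel(img_a, img_b, (3, 3)))
```

Because `conv_kernel` is a double sum of a valid kernel over patch pairs, it is itself a valid (symmetric, positive semi-definite) kernel between images.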
# %% [markdown]
# In this notebook we revisit the Convolutional Gaussian Processes (ConvGP), <cite data-cite="van2017convolutional"/>. Similarly to convolutional neural networks, ConvGP suits very well to model image processing tasks. As well as CNN, the ConvGP is endowed with translation invariant property. ConvGP imposes stronger and structured prior on a image response function $f(\cdot)$, using patch response function $g(\cdot) \sim GP(0, k_g(\cdot, \cdot))$. The image response function is a sum of patch responses for all (overlapping) patches in the image $f(\mathbb{x}) = \sum_{p=1}^{P}g(\mathbb{x}^{[p]})$, where $p$ is an index of a patch of image, and therefore $f(\cdot) \sim GP(0, \sum_p \sum_{p'} k_g(x^{[p]}, x^{[p']}))$. In a way, the patch response kernel can be viewed as an equivalent to a convolutional kernel of CNN.
#
# <img src="./convgp.png" alt="convgp" width="400px"/>
likelihood_layer = LikelihoodLayer(likelihood)
# %% [markdown]
# Below are are going to use Keras for training convolutional GP model. The details of GPflux and Keras itegration you can find [here](HOW TO ADD A CROSS REFERENCE TO A FILE?).
# Below are are going to use Keras for training convolutional GP model. The details of GPflux and Keras itegration you can find [here](HOW TO ADD A CROSS REFERENCE TO A FILE?).
# Below we use a Keras model built with GPflux to train the convolutional GP model. More details on GPflux's Keras support can be found in [this](gpflux_features.ipynb) notebook.
|
#%% [markdown]
# In this notebook we showed how to use GPflux to build and train shallow convolutional Gaussian processes using Keras. More advanced models like deep convolutional GPs require multi-output convolutional kernels and extensions in dispatchers of :mod:`~gpflow.conditionals` and in :mod:`~gpflow.covariances`.
newline?
author={van der Wilk, Mark and Rasmussen, Carl Edward and Hensman, James},
booktitle={NIPS},
year={2017}
}
newline?
@inproceedings{van2017convolutional,
title={Convolutional Gaussian Processes},
author={van der Wilk, Mark and Rasmussen, Carl Edward and Hensman, James},
booktitle={NIPS},
booktitle={NIPS},
booktitle={Advances in Neural Information Processing Systems},
For consistency with other references.
|
# %% [markdown]
# In the following steps we create a GP convolutional layer and a Bernoulli likelihood layer:
# 1. Create a convolutional kernel using :mod:`~gpflow.kernels.Convolutional`, and specify input's image shape and a patch shape.
not sure I understand this sentence?