
Shallow convolutional notebook #16

Open · wants to merge 1 commit into base: develop

Conversation

@awav (Collaborator) commented Apr 9, 2021

No description provided.

@vdutor (Member) left a comment

LGTM. Some of the text could do with an extra proofread, though. I have already modified some of it; feel free to adjust to your liking. Also, could you please update the text to use triple quotes `"""` rather than `#`, to be consistent with the other notebooks:

# %% [markdown]
"""
# Title
Lorem ipsum
"""

@@ -0,0 +1,178 @@
# %% [markdown]
# In this notebook we revisit the Convolutional Gaussian Processes (ConvGP), <cite data-cite="van2017convolutional"/>. Similarly to convolutional neural networks, ConvGP suits very well to model image processing tasks. As well as CNN, the ConvGP is endowed with translation invariant property. ConvGP imposes stronger and structured prior on a image response function $f(\cdot)$, using patch response function $g(\cdot) \sim GP(0, k_g(\cdot, \cdot))$. The image response function is a sum of patch responses for all (overlapping) patches in the image $f(\mathbb{x}) = \sum_{p=1}^{P}g(\mathbb{x}^{[p]})$, where $p$ is an index of a patch of image, and therefore $f(\cdot) \sim GP(0, \sum_p \sum_{p'} k_g(x^{[p]}, x^{[p']}))$. In a way, the patch response kernel can be viewed as an equivalent to a convolutional kernel of CNN.

Suggested change
# In this notebook we revisit the Convolutional Gaussian Processes (ConvGP), <cite data-cite="van2017convolutional"/>. Similarly to convolutional neural networks, ConvGP suits very well to model image processing tasks. As well as CNN, the ConvGP is endowed with translation invariant property. ConvGP imposes stronger and structured prior on a image response function $f(\cdot)$, using patch response function $g(\cdot) \sim GP(0, k_g(\cdot, \cdot))$. The image response function is a sum of patch responses for all (overlapping) patches in the image $f(\mathbb{x}) = \sum_{p=1}^{P}g(\mathbb{x}^{[p]})$, where $p$ is an index of a patch of image, and therefore $f(\cdot) \sim GP(0, \sum_p \sum_{p'} k_g(x^{[p]}, x^{[p']}))$. In a way, the patch response kernel can be viewed as an equivalent to a convolutional kernel of CNN.
# In this notebook, we revisit single-layer Convolutional Gaussian Processes (ConvGPs), <cite data-cite="van2017convolutional"/>. Similar to Convolutional Neural Networks (CNNs), ConvGPs are well suited for image processing tasks.
A ConvGP imposes a structured prior on an image response function. That is, the image response function $f(\cdot)$ is a sum of patch responses for all (overlapping) patches in the image:
$$
f(\mathbf{x}) = \sum_{p=1}^{P} g(\mathbf{x}^{[p]}),
$$
where $\mathbf{x}$ is an $H \times W$-sized image and $p$ indexes an $h \times w$-sized patch in the image. Imposing a GP prior on the patch response function, $g(\cdot) \sim GP(0, k_g(\cdot, \cdot))$, leads to another GP for the image response function
$$
f(\cdot) \sim GP(0, \sum_p \sum_{p'} k_g(x^{[p]}, x^{[p']})),
$$
because Gaussianity is preserved under linear transformations.
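The sum-of-patches construction above can be sketched in plain NumPy (a minimal illustration, not GPflux code: the helper names and the choice of an RBF patch kernel are hypothetical; any valid kernel $k_g$ would do):

```python
import numpy as np

def extract_patches(image, patch_shape):
    """All overlapping patches of an H x W image, flattened to vectors."""
    H, W = image.shape
    h, w = patch_shape
    return np.array([
        image[i:i + h, j:j + w].ravel()
        for i in range(H - h + 1)
        for j in range(W - w + 1)
    ])  # shape (P, h*w), with P = (H - h + 1) * (W - w + 1)

def rbf(a, b, lengthscale=1.0):
    """Patch response kernel k_g: here an RBF on flattened patches."""
    sq = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def conv_kernel(x1, x2, patch_shape):
    """Image kernel k_f(x1, x2) = sum_p sum_p' k_g(x1^[p], x2^[p'])."""
    p1 = extract_patches(x1, patch_shape)
    p2 = extract_patches(x2, patch_shape)
    return rbf(p1, p2).sum()

# MNIST-sized example: 28x28 images, 5x5 patches -> P = 24 * 24 = 576 patches.
rng = np.random.default_rng(0)
img_a = rng.normal(size=(28, 28))
img_b = rng.normal(size=(28, 28))
k_ab = conv_kernel(img_a, img_b, patch_shape=(5, 5))
```

Because $k_f$ is a (double) sum of evaluations of the valid kernel $k_g$, it is itself a valid kernel: symmetric, and positive semi-definite.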

# %% [markdown]
# In this notebook we revisit the Convolutional Gaussian Processes (ConvGP), <cite data-cite="van2017convolutional"/>. Similarly to convolutional neural networks, ConvGP suits very well to model image processing tasks. As well as CNN, the ConvGP is endowed with translation invariant property. ConvGP imposes stronger and structured prior on a image response function $f(\cdot)$, using patch response function $g(\cdot) \sim GP(0, k_g(\cdot, \cdot))$. The image response function is a sum of patch responses for all (overlapping) patches in the image $f(\mathbb{x}) = \sum_{p=1}^{P}g(\mathbb{x}^{[p]})$, where $p$ is an index of a patch of image, and therefore $f(\cdot) \sim GP(0, \sum_p \sum_{p'} k_g(x^{[p]}, x^{[p']}))$. In a way, the patch response kernel can be viewed as an equivalent to a convolutional kernel of CNN.
#
# <img src="./convgp.png" alt="convgp" width="400px"/>

Given that this is a shallow model I think this part of the architecture is more accurate to display. However, we'll have to recompile it without the incoming arrows.

[Attached screenshot (2021-04-22) of the proposed architecture diagram]

likelihood_layer = LikelihoodLayer(likelihood)

# %% [markdown]
# Below are are going to use Keras for training convolutional GP model. The details of GPflux and Keras itegration you can find [here](HOW TO ADD A CROSS REFERENCE TO A FILE?).
Suggested change
# Below are are going to use Keras for training convolutional GP model. The details of GPflux and Keras itegration you can find [here](HOW TO ADD A CROSS REFERENCE TO A FILE?).
# Below, we use a Keras model built with GPflux to train the convolutional GP model. More details on the GPflux and Keras integration can be found in [this](gpflux_features.ipynb) notebook.



# %% [markdown]
# In this notebook we showed how to use GPflux to build and train shallow convolutional Gaussian processes using Keras. More advanced models, such as deep convolutional GPs, require multi-output convolutional kernels and extensions to the dispatchers in :mod:`~gpflow.conditionals` and :mod:`~gpflow.covariances`.
newline?

author={van der Wilk, Mark and Rasmussen, Carl Edward and Hensman, James},
booktitle={NIPS},
year={2017}
}
newline?

@inproceedings{van2017convolutional,
title={Convolutional Gaussian Processes},
author={van der Wilk, Mark and Rasmussen, Carl Edward and Hensman, James},
booktitle={NIPS},

Suggested change
booktitle={NIPS},
booktitle={Advances in Neural Information Processing Systems},

For consistency with other references.


# %% [markdown]
# In the following steps we create a GP convolutional layer and a Bernoulli likelihood layer:
# 1. Create a convolutional kernel using :mod:`~gpflow.kernels.Convolutional`, and specify input's image shape and a patch shape.
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

not sure I understand this sentence?
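What the sentence seems to be getting at is that the convolutional kernel is parameterised by the image shape and the patch shape together: the patch response $g(\cdot)$ acts on $h \cdot w$-dimensional flattened patches, and the two shapes jointly determine how many overlapping patches an image yields. A hypothetical helper (not GPflow API) illustrating this, assuming stride 1 and no padding:

```python
def patch_dims(image_shape, patch_shape):
    """Number of overlapping patches and the input dimension of g(.),
    for stride-1 patch extraction without padding."""
    (H, W), (h, w) = image_shape, patch_shape
    num_patches = (H - h + 1) * (W - w + 1)  # P patches per image
    patch_dim = h * w                        # g(.) sees h*w-dim inputs
    return num_patches, patch_dim

# MNIST-sized example: 28x28 images with 5x5 patches.
P, d = patch_dims((28, 28), (5, 5))  # P = 576, d = 25
```

In GPflow's `Convolutional` kernel, these same two shapes are what the user supplies, and the base kernel plays the role of $k_g$ on the `d`-dimensional patch inputs.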
