
Approximation of the marginal GP #39

Open
javdrher opened this issue Jul 21, 2017 · 5 comments

@javdrher (Member)

A very cool addition would be the approximation of the marginal GP (MGP) presented in https://arxiv.org/abs/1310.6740. This approach requires both the gradient and the Hessian of the likelihood w.r.t. its hyperparameters, which should be doable with TensorFlow. The Hessian, however, is currently a problem in TensorFlow r1.2, as cholesky_grad itself has no registered gradient op. I saw that in the release candidate for r1.3, cholesky_grad is no longer the default gradient for tf.cholesky; the gradient is instead computed with regular TensorFlow ops. I quickly tested this, and the following code now runs:

import tensorflow as tf
import numpy as np

A = tf.Variable(np.zeros((10,)), dtype=tf.float32)
with tf.Session() as sess:
    # Build a rank-1 PSD matrix with jitter on the diagonal so the Cholesky exists.
    Ae = tf.expand_dims(A, -1)
    Xs = tf.matmul(Ae, Ae, transpose_b=True) + 1e-2 * tf.eye(10)
    X = tf.cholesky(Xs)
    # Second derivative through the Cholesky: this failed on r1.2 because
    # cholesky_grad had no gradient op of its own.
    Xg = tf.hessians(X, A)
    sess.run(tf.global_variables_initializer())
    print(sess.run(Xg, feed_dict={A: np.random.rand(10)}))

So we can try this again. @nknudde, could you look at this?
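
(For context: the MGP of that paper uses a Laplace approximation of the hyperparameter posterior, whose covariance is the inverse Hessian of the negative log marginal likelihood at the point estimate. A minimal sketch of that step, assuming a scalar tensor nll and a 1-D hyperparameter tensor theta, both hypothetical names:)

import tensorflow as tf

# Sketch only: `nll` is assumed to be the scalar negative log marginal
# likelihood of the GP, `theta` a 1-D tensor of free hyperparameters.
def laplace_covariance(nll, theta):
    # Hessian of the negative log likelihood at the optimum ...
    hess = tf.hessians(nll, theta)[0]   # shape (d, d)
    # ... whose inverse approximates the hyperparameter posterior covariance.
    return tf.matrix_inverse(hess)

The MGP then inflates the predictive variance with gradient terms of the predictive mean and variance w.r.t. theta, weighted by this covariance (see the paper for the exact correction).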

@gpfins (Contributor) commented Jul 22, 2017

I will definitely look at this!

@gpfins (Contributor) commented Jul 22, 2017

I'm putting it together in this branch: https://github.com/nknudde/GPflowOpt/tree/mgp. I made a rough version, but it still needs some cleaning up; in particular, the prediction code needs a closer look. Right now I evaluate every point individually to circumvent tf.gradients aggregating (summing) the gradients over all test points.
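
(To illustrate the workaround: tf.gradients sums the gradient contributions of all entries of ys, so differentiating a whole batch of predictive means w.r.t. the hyperparameters only yields the summed gradient. A per-point sketch, with a hypothetical fmean of shape (N, 1), a 1-D theta, and N known statically:)

import tensorflow as tf

def per_point_gradients(fmean, theta, n_points):
    # tf.gradients(fmean, theta) would return the SUM over all N points;
    # slicing out each scalar first keeps one gradient row per test point.
    rows = [tf.gradients(fmean[i, 0], theta)[0] for i in range(n_points)]
    return tf.stack(rows)   # shape (N, d)

This builds one gradient subgraph per test point, hence the per-point evaluation mentioned above.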

@javdrher (Member, Author)

Lots of progress there, looking very good! Thanks a lot for the effort.

Some thoughts:

  • It's the second wrapper class we would introduce. As the structure is largely the same, I plan to work on splitting the shared logic out and deriving DataScaler and MGP from a common model wrapper class (a sketch of the idea follows after this list). This design choice also opens up some appealing options.
  • We could add a section on MCMC as well. That way the notebook can be added to the documentation, covering two different approaches to dealing with hyperparameter uncertainty in GPflowOpt.
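
To make the first bullet concrete, here is a minimal sketch of the kind of shared wrapper I have in mind (names are illustrative, not the final API):

class ModelWrapper(object):
    """Sketch: delegate everything to a wrapped GPflow model by default.

    DataScaler and MGP would derive from this and only override what
    they change (e.g. predict_f), inheriting the rest.
    """
    def __init__(self, model):
        self.wrapped = model

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails; guard against
        # recursion before `wrapped` itself is set.
        if name == 'wrapped':
            raise AttributeError(name)
        return getattr(self.wrapped, name)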

@gpfins (Contributor) commented Aug 6, 2017

Here I am again.

I wrote some tests and it now also supports multi-output GPs. I think the notebook would best be made after the DataScaler reform.

@gpfins mentioned this issue Aug 7, 2017 (closed)
@icouckuy assigned and then unassigned icouckuy Aug 14, 2017
@icouckuy added this to the 0.2.0 release milestone Aug 14, 2017
@javdrher removed this from the 0.2.0 release milestone Sep 11, 2017
@javdrher (Member, Author)

Postponed until we manage to find a way to accomplish this in GPflow; some of the changes planned there might facilitate this.
