Approximation of the marginal GP #39
I will definitely look at this!
I'm putting it together in this branch: https://github.com/nknudde/GPflowOpt/tree/mgp. I made a rough version, but it still needs some cleaning up, and in particular the prediction code needs another look. Right now I evaluate every point individually to circumvent the way tf.gradients sums (collocates) the gradients over all outputs.
Lots of progress there, looking very good! Thanks a lot for the effort. Some thoughts:
Here I am again. I wrote some tests, and it now also supports multi-output GPs. I think the notebook would best be made after the datascaler reform.
Postponed until we find a way to accomplish this in GPflow; some of the planned changes there might facilitate it.
A very cool addition would be the approximation of the marginal GP presented in https://arxiv.org/abs/1310.6740. This approach requires both the gradient and the Hessian of the likelihood w.r.t. its hyperparameters, which should be doable with TensorFlow. The Hessian, however, is currently a problem in r1.2 of TensorFlow, as cholesky_grad has no gradient op of its own. I saw that in the RC for r1.3, cholesky_grad is no longer the default gradient op for tf.cholesky; instead the gradient is computed using TensorFlow ops, so it can itself be differentiated. I quickly tested this, and the following code now runs:
So we can try this again, @nknudde could you look at this?