Update gpytorch and gpflow benchmark #19
Here are a few updates to the GPyTorch and GPFlow benchmark code that I believe will make for a fairer comparison to both packages:
- Use whitened variational inference. The whitening operation is known to dramatically accelerate the convergence of variational optimization without any additional computational complexity. (Importantly, the GPyTorch `UnwhitenedVariationalStrategy` is very old code that uses outdated linear algebra. We haven't updated it because we wouldn't normally recommend using it at all.) A sketch of the whitened setup follows this list.
- For the GPyTorch data loader, set `num_workers=0`. More workers is (counterintuitively) slower for non-image datasets, as it requires thread synchronization. See the data-loader sketch below.
- You are comparing GPyTorch/GPFlow SVGP models against Falkon's kernel ridge regression, which essentially pits the speed of large kernel matrix operations against the convergence rate of SGD. Whitening is a known technique that improves SGD's convergence rate for SVGP (without any additional complexity).
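For concreteness, here is a minimal sketch of a whitened SVGP model in GPyTorch. The model class, kernel choice, and inducing-point sizes are placeholders rather than the benchmark's actual configuration; the only substantive point is that the default `VariationalStrategy` applies whitening, so the change is simply to use it in place of `UnwhitenedVariationalStrategy`:

```python
import torch
import gpytorch


class SVGPModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        # VariationalStrategy (rather than UnwhitenedVariationalStrategy)
        # parameterizes the variational distribution in the whitened space,
        # which accelerates convergence at no extra computational cost.
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True,
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


# Placeholder sizes: 500 inducing points, 10-dimensional inputs.
inducing_points = torch.randn(500, 10)
model = SVGPModel(inducing_points)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
```

On the GPFlow side, `gpflow.models.SVGP` exposes an analogous `whiten` flag, which defaults to whitened inference, so no change should be needed there beyond leaving the default in place.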
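The data-loader change is a one-liner; the dataset shapes and batch size below are placeholders, not the benchmark's own:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

train_x, train_y = torch.randn(10000, 10), torch.randn(10000)
train_dataset = TensorDataset(train_x, train_y)

# num_workers=0 loads batches in the main process, avoiding the
# worker-synchronization overhead that dominates for cheap, non-image
# tensor datasets.
train_loader = DataLoader(train_dataset, batch_size=1024, shuffle=True, num_workers=0)
```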
cc/ @jacobrgardner. @jameshensman, @alexggmatthews, am I missing anything on the GPFlow end?