
gradient computation for stochastic obj in hopt takes forever #56

Open
hep07 opened this issue Jan 10, 2023 · 1 comment


hep07 commented Jan 10, 2023

Hi, I am trying to use a stochastic objective function in hopt to do gradient-based hyperparameter optimization. I tried running it and the first iteration takes forever for some reason. My Falkon solver works without problems. I took a look at the code and wrote a small replication script based on how stoch_new_compreg.py is implemented. Did I do anything wrong in the following script?

import numpy as np
import torch
from sklearn import datasets  # needed for datasets.make_regression below

import falkon
from falkon.center_selection import FixedSelector

# generate a tiny dataset 
n = 100
d = 5
X, Y = datasets.make_regression(n, d, random_state=11)
num_train = int(0.8 * n)
X = X.astype(np.float64)
Y = Y.astype(np.float64).reshape(-1, 1)
X_train, y_train = torch.from_numpy(X[:num_train]), torch.from_numpy(Y[:num_train])
X_test, y_test = torch.from_numpy(X[num_train:]), torch.from_numpy(Y[num_train:])

m = 10
X_centers = X_train[:m, :].clone()
center_selector = FixedSelector(centers=X_centers)

options = falkon.FalkonOptions(keops_active="no", debug=True, cpu_preconditioner=True,
                               max_gpu_mem=12 * 10**9, chol_force_ooc=True,
                               min_cuda_iter_size_64=300000, cg_tolerance=1e-10)

sigma_init = torch.as_tensor(np.array([np.sqrt(d)]*d), dtype=torch.float64)
kernel = falkon.kernels.GaussianKernel(sigma=sigma_init, opt=options)
ridge = 1e-6
maxiter = 50
def error_fn(t, p):
    return torch.sqrt(torch.mean((t - p) ** 2)).item(), "RMSE"

# solve falkon first before running through gradient
flk = falkon.Falkon(kernel=kernel, center_selection=center_selector,
                    penalty=ridge, M=m, options=options, error_every=1, maxiter=maxiter, error_fn=error_fn)

flk.fit(X_train, y_train, X_train, y_train)



# Gist of the backward process in stoch_new_compreg.py.
# The trace part is removed; only the derivatives of the model-fitting term
# w.r.t. the kernel bandwidths are kept. Ridge and centers are non-trainable.

optimize_centers = False
optimize_ridge = False

def calc_dfit_bwd(zy_knm_solve_zy, zy_solve_knm_knm_solve_zy, zy_solve_kmm_solve_zy, pen_n, t,
                  include_kmm_term):
    """Nystrom regularized data-fit backward"""
    dfit_bwd = -(
        2 * zy_knm_solve_zy[t:].sum() -
        zy_solve_knm_knm_solve_zy[t:].sum()
    )
    print(dfit_bwd)
    print(dfit_bwd.shape)
    if include_kmm_term:
        print(zy_solve_kmm_solve_zy[t:].sum().shape)
        dfit_bwd += pen_n * zy_solve_kmm_solve_zy[t:].sum()
        print(pen_n * zy_solve_kmm_solve_zy[t:].sum())
    return dfit_bwd

solve_zy = flk.alpha_.clone().to("cuda:0", copy=False)
X_centers_dev = X_centers.to("cuda:0", copy=False).requires_grad_(optimize_centers)
solve_zy_dev = solve_zy.to("cuda:0", copy=False)
penalty_dev = torch.as_tensor(ridge).to("cuda:0", copy=False).requires_grad_(optimize_ridge)

# re-create the kernel with a trainable bandwidth (note: a single bandwidth here,
# unlike the d-dimensional sigma used to fit the Falkon model above)
sigma_init = torch.as_tensor(np.array([np.sqrt(d)]), dtype=torch.float64).requires_grad_(True)
kernel = falkon.kernels.GaussianKernel(sigma=sigma_init, opt=options)

with torch.autograd.enable_grad():
    kernel_dev = kernel.to("cuda:0")
    kmm_dev = kernel_dev(X_centers_dev, X_centers_dev, opt=options)
    zy_solve_kmm_solve_zy = (kmm_dev @ solve_zy_dev * solve_zy_dev).sum(0) 
        
    k_mn_zy = kernel_dev.mmv(X_centers_dev, X_train, y_train, opt=options)  # M x (T+P)
    zy_knm_solve_zy = k_mn_zy.mul(solve_zy_dev).sum(0)
    zy_solve_knm_knm_solve_zy = kernel_dev.mmv(X_train, X_centers_dev, solve_zy_dev, opt=options).square().sum(0)
    
    pen_n = penalty_dev * num_train
    dfit_bwd = calc_dfit_bwd(
                zy_knm_solve_zy, zy_solve_knm_knm_solve_zy, zy_solve_kmm_solve_zy, pen_n, 0,
                include_kmm_term=True)
    
grads = torch.autograd.grad(
        dfit_bwd, list(kernel_dev.diff_params.values()), retain_graph=False, allow_unused=False)
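
For reference, here is a minimal sketch that brackets the same forward block and the autograd.grad call with timers, to see which part is the slow one (this assumes a single CUDA device; torch.cuda.synchronize is needed because CUDA kernels launch asynchronously):

import time

def cuda_clock():
    # wait for queued CUDA work so the timing is honest
    torch.cuda.synchronize()
    return time.perf_counter()

t0 = cuda_clock()
with torch.autograd.enable_grad():
    kmm_dev = kernel_dev(X_centers_dev, X_centers_dev, opt=options)
    zy_solve_kmm_solve_zy = (kmm_dev @ solve_zy_dev * solve_zy_dev).sum(0)
    k_mn_zy = kernel_dev.mmv(X_centers_dev, X_train, y_train, opt=options)
    zy_knm_solve_zy = k_mn_zy.mul(solve_zy_dev).sum(0)
    zy_solve_knm_knm_solve_zy = kernel_dev.mmv(X_train, X_centers_dev, solve_zy_dev, opt=options).square().sum(0)
    dfit_bwd = calc_dfit_bwd(zy_knm_solve_zy, zy_solve_knm_knm_solve_zy, zy_solve_kmm_solve_zy,
                             penalty_dev * num_train, 0, include_kmm_term=True)
t1 = cuda_clock()
grads = torch.autograd.grad(dfit_bwd, list(kernel_dev.diff_params.values()))
t2 = cuda_clock()
print(f"forward: {t1 - t0:.2f}s  backward: {t2 - t1:.2f}s")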

I am also wondering: if we implement the gradient computation this way, we would not be able to use multiple GPUs in the backward pass. Am I right?

Thanks!

Contributor

Giodiro commented Feb 2, 2023

Hi @hep07 ,
sorry for the super late response.
I'm not sure I understand what you're trying to achieve with this code sample. After a quick look I think you're calculating some version of the gradient (dfit_bwd) and then differentiating it with torch.autograd.grad?
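
To illustrate the distinction with a minimal plain-PyTorch sketch (toy tensors, not falkon code): differentiating the objective itself gives its gradient, while differentiating an expression that is already a gradient gives a second-order quantity.

import torch

sigma = torch.tensor(1.0, dtype=torch.float64, requires_grad=True)
x = torch.linspace(-1.0, 1.0, 20, dtype=torch.float64).reshape(-1, 1)
loss = torch.exp(-torch.cdist(x, x) ** 2 / (2 * sigma ** 2)).sum()  # toy scalar objective
(g,) = torch.autograd.grad(loss, [sigma], create_graph=True)        # d(loss)/d(sigma)
(h,) = torch.autograd.grad(g, [sigma])                              # derivative of the gradient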

Are you also having an issue with the StochasticNystromCompReg objective for hyperparameter optimization?
Would you have a short reproducing code sample for that problem?

Giacomo
