Don't enter inference mode in solver. It's redundant with falkon.fit
giacomo m committed Apr 21, 2024
1 parent fe672ec commit 4bf276d
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion falkon/optim/conjgrad.py
@@ -282,7 +282,7 @@ def solve(self, X, M, Y, _lambda, initial_solution, max_iter, callback=None):
         stream = torch.cuda.current_stream(device)

         # Note that if we don't have CUDA this still works with stream=None.
-        with ExitStack() as stack, TicToc("ConjGrad preparation", False), torch.inference_mode():
+        with ExitStack() as stack, TicToc("ConjGrad preparation", False):
             if cuda_inputs:
                 stack.enter_context(torch.cuda.device(device))
                 stack.enter_context(torch.cuda.stream(stream))
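The retained code uses `contextlib.ExitStack` to enter the CUDA device and stream contexts only when the inputs actually live on the GPU. That conditional-context pattern can be sketched with stdlib context managers alone (the names `tracked`, `run`, and `use_gpu` below are illustrative stand-ins, not Falkon's API):

```python
from contextlib import ExitStack, contextmanager

entered = []

@contextmanager
def tracked(name):
    # Record entry and exit so we can observe which contexts were activated.
    entered.append(name)
    try:
        yield
    finally:
        entered.append(f"exit:{name}")

def run(use_gpu):
    entered.clear()
    with ExitStack() as stack:
        # Contexts are entered only when the condition holds, mirroring
        # the `if cuda_inputs:` branch in conjgrad.py.
        if use_gpu:
            stack.enter_context(tracked("cuda.device"))
            stack.enter_context(tracked("cuda.stream"))
        entered.append("work")
    return list(entered)

print(run(False))  # ['work']
print(run(True))   # ['cuda.device', 'cuda.stream', 'work', 'exit:cuda.stream', 'exit:cuda.device']
```

Note that `ExitStack` unwinds its contexts in LIFO order on exit, so the stream context closes before the device context. This also clarifies the commit rationale: `torch.inference_mode()` is re-entrant, so the copy inside the solver was harmless but redundant once `falkon.fit` already wraps the call in it.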
