support torch stable solve function #72

Open · wants to merge 1 commit into base: master
Conversation

@TeaPoly commented Mar 16, 2023

I noticed that lhotse-speech uses the PyTorch version of nara_wpe, but torch.linalg.solve raises errors for some audio. So I replaced torch.linalg.solve with _stable_solve.
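For context, the fallback idea behind a "stable solve" can be sketched roughly as follows (`stable_solve_sketch` is a hypothetical name for illustration; the actual `_stable_solve` in nara_wpe may differ in details):

```python
import numpy as np

def stable_solve_sketch(A, B):
    """Hypothetical sketch: try a direct solve first, and fall back to a
    least-squares solve when the matrix is singular."""
    try:
        return np.linalg.solve(A, B)
    except np.linalg.LinAlgError:
        # lstsq handles rank-deficient A via a minimum-norm solution
        return np.linalg.lstsq(A, B, rcond=None)[0]
```

Note that this pattern relies on `np.linalg.solve` raising `LinAlgError` synchronously, which becomes the sticking point for a GPU port.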

@boeddeker (Member)

Hey, _stable_solve is a numpy function and works only with numpy arrays. Have you tried it with torch on the GPU? There _stable_solve should fail (numpy converts CPU tensors to arrays, but fails for GPU tensors).

In general, the idea behind _stable_solve doesn't work in torch, because the GPU executes code asynchronously, while _stable_solve relies on exceptions that can only be caught from synchronous code.

If you only need a CPU implementation, I recommend using numpy. I have observed higher numerical stability with numpy, and that code is better tested.
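The point about synchronous exceptions can be illustrated with a small CPU-only sketch (assuming PyTorch is installed; the `pinv` fallback is shown here as an exception-free alternative, not as the approach taken by nara_wpe):

```python
import torch

# On CPU, torch.linalg.solve raises synchronously for a singular
# matrix, so a try/except fallback is possible there.  On a GPU
# stream, kernels run asynchronously and the error may only surface
# later at a synchronization point, where it can no longer be caught
# around the solve call itself.
A = torch.zeros(2, 2)  # singular on purpose
b = torch.ones(2, 1)
try:
    x = torch.linalg.solve(A, b)
except RuntimeError:
    # exception-free alternative: pseudo-inverse based least squares
    x = torch.linalg.pinv(A) @ b
```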

@TeaPoly (Author) commented Mar 17, 2023

> Hey, _stable_solve is a numpy function and works only with numpy arrays. Have you tried it with torch on the GPU? There _stable_solve should fail (numpy converts CPU tensors to arrays, but fails for GPU tensors).
>
> In general, the idea behind _stable_solve doesn't work in torch, because the GPU executes code asynchronously, while _stable_solve relies on exceptions that can only be caught from synchronous code.
>
> If you only need a CPU implementation, I recommend using numpy. I have observed higher numerical stability with numpy, and that code is better tested.

I modified _stable_solve to dispatch based on the input type (numpy array or torch tensor), and I have tried the PyTorch version of nara_wpe with a torch.Tensor as input. It works fine. I will try pushing the input and nara_wpe to a GPU device later.
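The type-based dispatch described above could look roughly like this (`solve_by_type` is a hypothetical name for illustration and not necessarily the code in this PR):

```python
import numpy as np

def solve_by_type(A, B):
    """Hypothetical sketch of type-based dispatch: keep the numpy
    try/except fallback for arrays, use torch.linalg.solve for tensors."""
    if isinstance(A, np.ndarray):
        # numpy path: synchronous, so the exception fallback works
        try:
            return np.linalg.solve(A, B)
        except np.linalg.LinAlgError:
            return np.linalg.lstsq(A, B, rcond=None)[0]
    # torch path: imported lazily so numpy-only callers don't need torch
    import torch
    return torch.linalg.solve(A, B)
```

This keeps the numpy behavior unchanged while letting torch tensors stay on their device, though it does not address the GPU exception issue boeddeker raised.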

@TeaPoly (Author) commented Mar 17, 2023

torch_wpe works fine on a GPU device. Test code is here:

```python
import numpy as np
import torch
from nara_wpe import wpe as np_wpe
from nara_wpe import torch_wpe

T = np.random.randint(100, 120)
D = np.random.randint(2, 6)
K = np.random.randint(3, 5)
delay = np.random.randint(1, 3)

# Real test:
Y = np.random.normal(size=(D, T))
desired = np_wpe.wpe_v6(Y, K, delay, statistics_mode='full')

# Compute on GPU
device = torch.device('cuda:0')
Y_gpu = torch.tensor(Y).to(device)
actual = torch_wpe.wpe_v6(Y_gpu, K, delay, statistics_mode='full')

# Convert to a numpy array after copying the tensor from GPU to CPU
actual = actual.cpu().numpy()

np.testing.assert_allclose(actual, desired, atol=1e-6)
```
