
[ASR/Kaldi] multi-gpu problem #775

Open
Slyne opened this issue Dec 7, 2020 · 0 comments
Labels
bug Something isn't working

Comments


Slyne commented Dec 7, 2020

Related to ASR/Kaldi

Describe the bug
When deployed on multiple GPUs based on this example, the RTF decreased a lot.
There is a known issue listed under the release notes section of this example. Not sure whether this is the same issue.

To Reproduce
Steps to reproduce the behavior:
Run this example on multiple GPUs.
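For reference, a hypothetical sketch of exposing both GPUs to the container before launching the example (assumes the NVIDIA Container Toolkit is installed; the image tag is the one from the Environment section below, and the exact launch scripts inside the example may differ):

```shell
# Sanity check: both V100s should be visible inside the container
# when it is started with --gpus all.
docker run --rm --gpus all \
    nvcr.io/nvidia/tritonserver:20.03-py3 \
    nvidia-smi
```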

Expected behavior
Expect the RTF to increase as the number of GPUs increases.
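To make the expectation concrete: RTF here appears to be used in the throughput sense (seconds of audio decoded per second of wall-clock time), so higher is better and it should ideally scale with the GPU count. A minimal sketch of the metric (the function name is illustrative, not from the example):

```python
def real_time_factor(audio_seconds: float, wall_seconds: float) -> float:
    """Throughput-style RTF: seconds of audio decoded per second of
    wall-clock time. Higher is better; with perfect scaling it would
    double when going from one GPU to two."""
    return audio_seconds / wall_seconds

# e.g. decoding one hour (3600 s) of audio in 120 s of wall time
print(real_time_factor(3600.0, 120.0))  # 30.0
```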

Environment
Please provide at least:

  • Container version: nvcr.io/nvidia/tritonserver:20.03-py3
  • GPUs in the system: 2x Tesla V100-SXM2-16GB
  • CUDA driver version: 418.67
@Slyne Slyne added the bug Something isn't working label Dec 7, 2020