forked from wenet-e2e/wenet

Production First and Production Ready End-to-End Speech Recognition Toolkit

WeNet


Docs | Tutorial | Papers | Runtime(x86) | Runtime(android)

We share neural Net together.

The main motivation of WeNet is to close the gap between research and production end-to-end (E2E) speech recognition models, to reduce the effort of productionizing E2E models, and to explore better E2E models for production.

Highlights

  • Production first and production ready: The Python code of WeNet meets the requirements of TorchScript, so a model trained by WeNet can be exported directly by Torch JIT and served with LibTorch for inference. There is no gap between the research model and the production model: neither model conversion nor additional code is required for inference.
  • Unified solution for streaming and non-streaming ASR: WeNet implements the Unified Two-Pass (U2) framework to achieve an accurate, fast, and unified E2E model, which is favorable for industry adoption.
  • Portable runtime: Several demos will be provided to show how to host WeNet-trained models on different platforms, including x86 servers and on-device Android.
  • Lightweight: WeNet is designed specifically for E2E speech recognition, with clean and simple code. It is based entirely on PyTorch and its ecosystem, with no dependency on Kaldi, which simplifies installation and usage.

Performance Benchmark

Please see examples/$dataset/s0/README.md for WeNet benchmark on different speech datasets.

Installation

  • Clone the repo and set up the environment:

```shell
# Clone the repository and enter it
git clone https://github.com/mobvoi/wenet.git
cd wenet

# Create and activate a Conda environment
conda create -n wenet python=3.8
conda activate wenet

# Install Python dependencies and PyTorch
pip install -r requirements.txt
conda install pytorch==1.6.0 cudatoolkit=10.1 torchaudio -c pytorch
```

Discussion & Communication

In addition to discussion in GitHub Issues, we created a WeChat group for better discussion and quicker response. Please scan the following QR code in WeChat to join the chat group. If that fails, scan the personal QR code on the right and send contact info such as "wenet", and we will invite you to the chat group.

If you cannot access the QR image, please access it on gitee.

Wenet chat group

Acknowledgments

We borrowed a lot of code from ESPnet, and we referred to OpenTransformer for batch inference.

Citations

@article{zhang2021wenet,
  title={WeNet: Production First and Production Ready End-to-End Speech Recognition Toolkit},
  author={Zhang, Binbin and Wu, Di and Yang, Chao and Chen, Xiaoyu and Peng, Zhendong and Wang, Xiangming and Yao, Zhuoyuan and Wang, Xiong and Yu, Fan and Xie, Lei and others},
  journal={arXiv preprint arXiv:2102.01547},
  year={2021}
}

@article{zhang2020unified,
  title={Unified Streaming and Non-streaming Two-pass End-to-end Model for Speech Recognition},
  author={Zhang, Binbin and Wu, Di and Yao, Zhuoyuan and Wang, Xiong and Yu, Fan and Yang, Chao and Guo, Liyong and Hu, Yaguang and Xie, Lei and Lei, Xin},
  journal={arXiv preprint arXiv:2012.05481},
  year={2020}
}
