
# Neural FCA for Blind Source Separation

This repository provides the inference scripts for Neural FCA, proposed in our paper *Neural Full-Rank Spatial Covariance Analysis for Blind Source Separation*.

Please cite as:

```bibtex
@article{bando2021neural,
  title={Neural Full-Rank Spatial Covariance Analysis for Blind Source Separation},
  author={Bando, Yoshiaki and Sekiguchi, Kouhei and Masuyama, Yoshiki and Nugraha, Aditya Arie and Fontaine, Mathieu and Yoshii, Kazuyoshi},
  journal={IEEE Signal Processing Letters},
  volume={28},
  pages={1670--1674},
  year={2021},
  publisher={IEEE}
}
```

## Environments

Neural FCA was developed with Python 3.8. Install its dependencies with:

```sh
pip install -r requirements.txt
```

## Pre-trained model

The pre-trained model used in the paper for separating speech mixtures can be downloaded from the following URL:

```sh
wget https://github.com/yoshipon/spl2021_neural-fca/releases/download/spl2021/model.zip
unzip model.zip
```

## Source separation

1. The pre-trained model separates four-channel mixtures of two speech sources:

   ```sh
   python neural-fca/separate.py model/ input.wav output.wav
   ```

   The model predicts three sources, assuming two target sources and one noise source.

2. To perform the separation without inference-time parameter updates, run:

   ```sh
   python neural-fca/separate.py model/ input.wav output.wav --n_iter=0
   ```

3. To obtain `neural-fca.png`, which shows the mixture and separated source spectrograms, add `--plot`:

   ```sh
   python neural-fca/separate.py model/ input.wav output.wav --n_iter=0 --plot
   ```
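If no real four-channel recording is at hand, a quick smoke-test `input.wav` can be synthesized with Python's standard `wave` module. This is only a sketch: the 16 kHz sample rate and 16-bit PCM format are assumptions, so check the repository's configuration for the format the model actually expects.

```python
import math
import struct
import wave

# Synthesize a 1-second, 4-channel, 16-bit PCM WAV file as a dummy input.
# The 16 kHz sample rate is an assumption, not taken from the repository.
n_channels, sample_rate, n_frames = 4, 16000, 16000

with wave.open("input.wav", "wb") as f:
    f.setnchannels(n_channels)
    f.setsampwidth(2)           # 16-bit samples
    f.setframerate(sample_rate)
    frames = bytearray()
    for i in range(n_frames):
        # 440 Hz tone, duplicated on all four channels.
        s = int(8000 * math.sin(2 * math.pi * 440 * i / sample_rate))
        frames += struct.pack("<%dh" % n_channels, *([s] * n_channels))
    f.writeframes(bytes(frames))
```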

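The on-disk format of `output.wav` is not documented here. Assuming `separate.py` writes the estimated sources as separate channels of a single WAV file (an assumption; inspect the script to confirm), they can be split into per-source mono files with the standard `wave` module:

```python
import wave

def split_channels(path, prefix="source"):
    """Write each channel of a multi-channel WAV to its own mono file."""
    with wave.open(path, "rb") as f:
        n_ch = f.getnchannels()
        width = f.getsampwidth()
        rate = f.getframerate()
        raw = f.readframes(f.getnframes())
    frame_size = n_ch * width  # bytes per interleaved frame
    for ch in range(n_ch):
        # Pick this channel's sample out of every interleaved frame.
        mono = b"".join(
            raw[i + ch * width : i + (ch + 1) * width]
            for i in range(0, len(raw), frame_size)
        )
        with wave.open("%s%d.wav" % (prefix, ch), "wb") as out:
            out.setnchannels(1)
            out.setsampwidth(width)
            out.setframerate(rate)
            out.writeframes(mono)
```

For example, `split_channels("output.wav")` would produce `source0.wav`, `source1.wav`, and so on, one mono file per estimated source.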
## License

This repository is released under the MIT License. The pre-trained model is released under the Creative Commons BY-NC-ND 4.0 License.

## Contact

Yoshiaki Bando, [email protected]

National Institute of Advanced Industrial Science and Technology, Japan