Fast-GeCo: Noise-robust Speech Separation with Fast Generative Correction

Paper | HuggingFace | Demo page

In this paper, we propose a generative correction method to enhance the output of a discriminative separator. By leveraging a generative corrector based on a diffusion model, we refine the separation process for single-channel mixture speech by removing noise and perceptually unnatural distortions. Furthermore, we optimize the generative model with a predictive loss to streamline the diffusion model's reverse process into a single step and to rectify any errors introduced by that process. Our method achieves state-of-the-art performance on the in-domain Libri2Mix noisy dataset and on out-of-domain WSJ with a variety of noise types, improving SI-SNR by 22-35% relative to SepFormer and demonstrating robustness and strong generalization capabilities.

Try our Hugging Face Space!

(Figure: Fast-GeCo overview)

NEWS & TODO

  • Release model weights and inference code
  • HuggingFace Spaces demo
  • Scale up training

Environment setup

conda create -n geco python=3.8.19
conda activate geco
pip install -r requirements.txt

Inference examples

To run inference with SepFormer and Fast-GeCo, first install speechbrain:

pip install speechbrain==1.0.0

Then download our pretrained models from Hugging Face and run:

python inference.py  \
    --test_file 'demo/item0_mix.wav' \
    --save_name 'item0' \
    --save_folder 'output_demo' \
    --ckpt 'YOUR_MODEL_PATH'
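
If you have a whole folder of mixtures, a minimal batch-inference sketch (assuming the inference.py CLI shown above and the *_mix.wav naming used in the Data Preparation section below) could look like this:

import subprocess
from pathlib import Path

ckpt = "YOUR_MODEL_PATH"          # same placeholder as in the command above
mix_dir = Path("demo")            # folder containing *_mix.wav files
out_dir = "output_demo"

for mix in sorted(mix_dir.glob("*_mix.wav")):
    # e.g. "item0_mix.wav" -> save_name "item0"
    save_name = mix.stem.replace("_mix", "")
    subprocess.run([
        "python", "inference.py",
        "--test_file", str(mix),
        "--save_name", save_name,
        "--save_folder", out_dir,
        "--ckpt", ckpt,
    ], check=True)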

Data Preparation

To train GeCo or Fast-GeCo, you should prepare a data folder in the following way:

libri2mix-train100/
    -1_mix.wav
    -1_source1.wav
    -1_source1hatP.wav
    -2_mix.wav
    -2_source1.wav
    -2_source1hatP.wav
    ....

Here, *_mix.wav is the mixture audio, *_source1.wav is the ground-truth audio, and *_source1hatP.wav is the audio estimated by a speech separation model such as SepFormer.
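
If you use SpeechBrain's pretrained SepFormer as the separator, a minimal sketch for generating the *_source1hatP.wav estimates could look like the following; the pretrained checkpoint name and the 8 kHz sample rate are assumptions, so adapt them to the separator you actually use:

import torchaudio
from pathlib import Path
from speechbrain.inference.separation import SepformerSeparation

data_dir = Path("libri2mix-train100")
separator = SepformerSeparation.from_hparams(
    source="speechbrain/sepformer-libri2mix",   # assumed pretrained checkpoint
    savedir="pretrained_sepformer",
)

for mix_path in sorted(data_dir.glob("*_mix.wav")):
    # est_sources: (batch, time, n_src) waveform tensor
    est_sources = separator.separate_file(path=str(mix_path))
    est = est_sources[:, :, 0].detach().cpu()    # keep the first estimated source
    out_path = data_dir / mix_path.name.replace("_mix.wav", "_source1hatP.wav")
    torchaudio.save(str(out_path), est, 8000)    # 8 kHz Libri2Mix variant assumed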

Train GeCo

With 1 GPU, run:

CUDA_VISIBLE_DEVICES=0 python train-geco.py --gpus 1 --batch_size 16

Train Fast-GeCo

With 1 GPU, run:

CUDA_VISIBLE_DEVICES=0 python train-fastgeco.py --gpus 1 --batch_size 32

Evaluate GeCo

CUDA_VISIBLE_DEVICES=0 python eval-geco.py

Evaluate Fast-GeCo

CUDA_VISIBLE_DEVICES=0 python eval-fastgeco.py
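
The evaluation scripts report SI-SNR over the full test set. As a quick sanity check on a single pair of files, here is a minimal sketch of the metric; the output and reference paths below are assumptions based on the demo naming:

import torch
import torchaudio

def si_snr(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Scale-invariant signal-to-noise ratio in dB for 1-D waveforms."""
    n = min(est.shape[-1], ref.shape[-1])
    est, ref = est[:n] - est[:n].mean(), ref[:n] - ref[:n].mean()
    # Project the estimate onto the reference to remove scale differences.
    target = (torch.dot(est, ref) / (torch.dot(ref, ref) + eps)) * ref
    noise = est - target
    return 10 * torch.log10((target.pow(2).sum() + eps) / (noise.pow(2).sum() + eps))

est, _ = torchaudio.load("output_demo/item0.wav")      # assumed output path
ref, _ = torchaudio.load("demo/item0_source1.wav")     # assumed reference path
print(f"SI-SNR: {si_snr(est[0], ref[0]).item():.2f} dB")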

Run baseline SepFormer

We also provide code to train and evaluate the SepFormer baseline, the same model used in our paper.

See speechbrain for more details on training and testing.

License

The codebase is released under the MIT license.

Citations & References

We kindly ask you to cite our paper in your publications when using any of our research or code:

@inproceedings{wang24i_interspeech,
  title     = {Noise-robust Speech Separation with Fast Generative Correction},
  author    = {Helin Wang and Jesús Villalba and Laureano Moro-Velazquez and Jiarui Hai and Thomas Thebaud and Najim Dehak},
  year      = {2024},
  booktitle = {Interspeech 2024},
  pages     = {2165--2169},
  doi       = {10.21437/Interspeech.2024-327},
  issn      = {2958-1796},
}

Acknowledgement

[1] speechbrain

[2] Conv-TasNet

[3] sgmse-bbed

[4] sgmse-crp
