---
language: "en"
tags:
- Robust ASR
- Speech Enhancement
- PyTorch
license: "apache-2.0"
datasets:
- Voicebank
- DEMAND
metrics:
- WER
- PESQ
- eSTOI
---

# 1D CNN + Transformer Trained w/ Mimic Loss

This repository provides all the necessary tools to perform enhancement and robust ASR training (EN) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance is:

| Release | Test PESQ | Test eSTOI | Valid WER | Test WER |
|:--------:|:---------:|:----------:|:---------:|:--------:|
| 21-03-08 | 2.92 | 85.2 | 3.20 | 2.96 |

## Pipeline description

The mimic loss training system consists of three steps:

1. A perceptual model is pre-trained on clean speech features, the same type used for the enhancement masking system.
2. An enhancement model is trained with mimic loss, using the pre-trained perceptual model (a minimal illustrative sketch of this loss is given below, after the training instructions).
3. A large ASR model pre-trained on LibriSpeech is fine-tuned using the enhancement front-end.

The enhancement and ASR models can be used together or independently.

## Install SpeechBrain

First of all, please install SpeechBrain with the following command:

```
pip install speechbrain
```

Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).

## Pretrained Usage

To use the mimic-loss-trained model for enhancement, use the following simple code:

```python
import torchaudio
from speechbrain.pretrained import SpectralMaskEnhancement

enhance_model = SpectralMaskEnhancement.from_hparams(
    source="speechbrain/mtl-mimic-voicebank",
    savedir="pretrained_models/mtl-mimic-voicebank",
)
enhanced = enhance_model.enhance_file("speechbrain/mtl-mimic-voicebank/example.wav")

# Saving enhanced signal on disk
torchaudio.save('enhanced.wav', enhanced.unsqueeze(0).cpu(), 16000)
```

### Inference on GPU

To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.

### Training

The model was trained with SpeechBrain (commit 150e1890). To train it from scratch, follow these steps:

1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```

2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```

3. Run training:
```
cd recipes/Voicebank/MTL/ASR_enhance
python train.py hparams/enhance_mimic.yaml --data_folder=your_data_folder
```

You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1HaR0Bq679pgd1_4jD74_wDRUq-c3Wl4L?usp=sharing).

### Limitations

The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
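### Mimic Loss Sketch

As a complement to the pipeline description above, here is a minimal PyTorch sketch of the idea behind mimic loss, not the actual recipe code: the pre-trained perceptual model is kept frozen, and the enhancement model is penalized when the perceptual-model activations for its enhanced features differ from those for the corresponding clean features. The names `mimic_loss`, `perceptual_model`, `enhanced_feats`, and `clean_feats` are illustrative assumptions; the real training loop in `recipes/Voicebank/MTL/ASR_enhance` combines this term with other objectives.

```python
import torch
import torch.nn.functional as F

def mimic_loss(perceptual_model, enhanced_feats, clean_feats):
    """Illustrative mimic loss: match perceptual-model activations.

    `perceptual_model` is the network pre-trained on clean speech
    features (step 1 of the pipeline); it stays frozen while the
    enhancement model is trained (step 2).
    """
    # Freeze the perceptual model's parameters; gradients still flow
    # through it back to the enhancement model's output.
    for p in perceptual_model.parameters():
        p.requires_grad_(False)

    # Target activations computed from clean features (no gradient needed).
    with torch.no_grad():
        clean_repr = perceptual_model(clean_feats)

    # Activations for the enhanced features remain differentiable with
    # respect to the enhancement model that produced them.
    enhanced_repr = perceptual_model(enhanced_feats)

    return F.mse_loss(enhanced_repr, clean_repr)
```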
## Referencing Mimic Loss

If you find mimic loss useful, please cite:

```
@inproceedings{bagchi2018spectral,
  title={Spectral Feature Mapping with Mimic Loss for Robust Speech Recognition},
  author={Bagchi, Deblin and Plantinga, Peter and Stiff, Adam and Fosler-Lussier, Eric},
  booktitle={IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2018}
}
```

## Referencing SpeechBrain

If you find SpeechBrain useful, please cite:

```
@misc{SB2021,
  author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua},
  title = {SpeechBrain},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\\url{https://github.com/speechbrain/speechbrain}},
}
```

#### About SpeechBrain

SpeechBrain is an open-source, all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly, and it achieves competitive or state-of-the-art performance in various domains.

Website: https://speechbrain.github.io/

GitHub: https://github.com/speechbrain/speechbrain