---
tags:
- espnet
- audio
- audio-to-audio
language: en
datasets:
- dns_ins20
license: cc-by-4.0
---

## ESPnet2 ENH model

### `Johnson-Lsx/Shaoxiong_Lin_dns_ins20_enh_enh_train_enh_dccrn_raw`

This model was trained by Shaoxiong Lin using the dns_ins20 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```bash
cd espnet
git checkout 4538462eb7dc6a6b858adcbd3a526fb8173d6f73
pip install -e .
cd egs2/dns_ins20/enh1
./run.sh --skip_data_prep false --skip_train true --download_model Johnson-Lsx/Shaoxiong_Lin_dns_ins20_enh_enh_train_enh_dccrn_raw
```

A Python alternative is sketched after the results table below.

# RESULTS
## Environments
- date: `Thu Feb 10 23:11:40 CST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.9.1`
- Git hash: `6f66283b9eed7b0d5e5643feb18d8f60118a4afc`
  - Commit date: `Mon Dec 13 15:30:29 2021 +0800`

## enh_train_enh_dccrn_batch_size_raw

config: ./conf/tuning/train_enh_dccrn_batch_size.yaml

|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_cv_synthetic|0.98|24.69|24.69|0.00|
|enhanced_tt_synthetic_no_reverb|0.96|17.69|17.69|0.00|
|enhanced_tt_synthetic_with_reverb|0.81|10.45|10.45|0.00|
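For inference outside the recipe, the model can also be loaded from Python. The following is a minimal sketch using the `espnet_model_zoo` downloader and ESPnet2's `SeparateSpeech` interface; `noisy.wav` is a placeholder file name, and the exact keys returned by `download_and_unpack` depend on how the model was packed, so treat this as a template rather than a guaranteed API.

```python
import soundfile as sf
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.enh_inference import SeparateSpeech

# Download and unpack the pretrained model (assumed keys: train_config, model_file).
d = ModelDownloader()
cfg = d.download_and_unpack(
    "Johnson-Lsx/Shaoxiong_Lin_dns_ins20_enh_enh_train_enh_dccrn_raw"
)
separate_speech = SeparateSpeech(
    train_config=cfg["train_config"],
    model_file=cfg["model_file"],
    normalize_output_wav=True,
    device="cpu",  # or "cuda:0"
)

# The recipe trains on 16 kHz single-channel audio; "noisy.wav" is a placeholder.
mixture, fs = sf.read("noisy.wav")
# SeparateSpeech expects a (batch, num_samples) array and returns one waveform
# per output speaker; this enhancement model produces a single output.
enhanced = separate_speech(mixture[None, :], fs=fs)[0]
sf.write("enhanced.wav", enhanced.squeeze(), fs)
```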
## ENH config

```
config: ./conf/tuning/train_enh_dccrn_batch_size.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_dccrn_batch_size_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 46366
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - si_snr
  - max
- - valid
  - loss
  - min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 32
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
- exp/enh_stats_16k/train/noise_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
- exp/enh_stats_16k/valid/noise_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 64000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_synthetic/wav.scp
  - speech_mix
  - sound
- - dump/raw/tr_synthetic/spk1.scp
  - speech_ref1
  - sound
- - dump/raw/tr_synthetic/noise1.scp
  - noise_ref1
  - sound
valid_data_path_and_name_and_type:
- - dump/raw/cv_synthetic/wav.scp
  - speech_mix
  - sound
- - dump/raw/cv_synthetic/spk1.scp
  - speech_ref1
  - sound
- - dump/raw/cv_synthetic/noise1.scp
  - noise_ref1
  - sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
  lr: 0.001
  eps: 1.0e-08
  weight_decay: 1.0e-07
scheduler: reducelronplateau
scheduler_conf:
  mode: min
  factor: 0.7
  patience: 1
init: null
model_conf:
  loss_type: si_snr
criterions:
# The first criterion
- name: si_snr
  conf:
    eps: 1.0e-7
  # the wrapper for the current criterion
  # for the single-talker case, we simply use the fixed_order wrapper
  wrapper: fixed_order
  wrapper_conf:
    weight: 1.0
use_preprocessor: false
encoder: stft
encoder_conf:
  n_fft: 512
  win_length: 400
  hop_length: 100
separator: dccrn
separator_conf: {}
decoder: stft
decoder_conf:
  n_fft: 512
  win_length: 400
  hop_length: 100
required:
- output_dir
version: 0.10.5a1
distributed: true
```
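A few notes on the architecture and objective above: the `stft` encoder/decoder use a 512-point FFT with a 400-sample (25 ms at 16 kHz) window and a 100-sample (6.25 ms) hop, the `dccrn` separator runs with its default settings, and training maximizes SI-SNR under a `fixed_order` wrapper (there is a single reference speaker, so no permutation search is needed). For reference, here is a minimal NumPy sketch of the standard SI-SNR definition; ESPnet's own implementation may differ in details:

```python
import numpy as np

def si_snr(estimate: np.ndarray, reference: np.ndarray, eps: float = 1e-7) -> float:
    """Scale-invariant signal-to-noise ratio in dB (higher is better)."""
    # Remove DC offsets so the measure is invariant to constant shifts.
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Project the estimate onto the reference: the "target" component.
    s_target = np.dot(estimate, reference) / (np.dot(reference, reference) + eps) * reference
    # Everything left over is treated as noise/distortion.
    e_noise = estimate - s_target
    return 10.0 * np.log10((np.dot(s_target, s_target) + eps) / (np.dot(e_noise, e_noise) + eps))

# Example: a 440 Hz tone degraded by white noise at roughly 17 dB SNR.
rng = np.random.default_rng(0)
ref = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
print(si_snr(ref + 0.1 * rng.standard_normal(16000), ref))  # ~17 dB
```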
### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{ESPnet-SE,
  author    = {Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{\"{o}}ddeker and Zhuo Chen and Shinji Watanabe},
  title     = {ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
  booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2021, Shenzhen, China, January 19-22, 2021},
  pages     = {785--792},
  publisher = {{IEEE}},
  year      = {2021},
  url       = {https://doi.org/10.1109/SLT48900.2021.9383615},
  doi       = {10.1109/SLT48900.2021.9383615},
  timestamp = {Mon, 12 Apr 2021 17:08:59 +0200},
  biburl    = {https://dblp.org/rec/conf/slt/Li0ZSCKHHBC021.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```