
ESPnet2 ENH model

espnet/yoshiki_wsj0_2mix_spatialized_enh_tfgridnet_waspaa2023_raw

This model was trained by Yoshiki using the wsj0_2mix_spatialized recipe in ESPnet.

Demo: How to use in ESPnet2

Follow the ESPnet installation instructions if you haven't done so already.

cd espnet
pip install -e .
cd egs2/wsj0_2mix_spatialized/enh1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/yoshiki_wsj0_2mix_spatialized_enh_tfgridnet_waspaa2023_raw
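
Alternatively, you can run inference directly from Python. A minimal sketch, assuming espnet and espnet_model_zoo are installed and that `SeparateSpeech.from_pretrained` is available in your ESPnet version; the input file name is hypothetical:

```python
# Minimal inference sketch (the mixture file name is a placeholder).
import soundfile as sf
from espnet2.bin.enh_inference import SeparateSpeech

separate_speech = SeparateSpeech.from_pretrained(
    model_tag="espnet/yoshiki_wsj0_2mix_spatialized_enh_tfgridnet_waspaa2023_raw",
    normalize_output_wav=True,
)

# This model expects an 8-channel 16 kHz mixture: sf.read returns
# an array of shape (samples, channels).
mixture, fs = sf.read("mixture_8ch.wav")  # hypothetical input file
# SeparateSpeech takes a batched array: (batch, samples, channels).
waves = separate_speech(mixture[None, :, :], fs=fs)
for i, wav in enumerate(waves):
    sf.write(f"speaker{i + 1}.wav", wav.squeeze(), fs)
```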

RESULTS

Environments

  • date: Mon Aug 7 09:48:51 UTC 2023
  • python version: 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]
  • espnet version: espnet 202304
  • pytorch version: pytorch 1.10.1+cu111
  • Git hash: 277ec3c33d2ca7f47d9d31c84e4dae54ce017bd7
    • Commit date: Wed Aug 10 13:32:09 2022 -0400

enh_train_enh_tfgridnet_waspaa2023_raw

config: ./conf/tuning/train_enh_tfgridnet_waspaa2023.yaml

| dataset | STOI (×100) | SAR (dB) | SDR (dB) | SIR (dB) | SI_SNR (dB) |
|---|---|---|---|---|---|
| enhanced_cv_spatialized_multi_multich_min_16k | 98.58 | 23.20 | 22.75 | 33.93 | 22.66 |
| enhanced_tt_spatialized_anechoic_multich_min_16k | 99.65 | 27.45 | 26.98 | 38.43 | 27.02 |
| enhanced_tt_spatialized_reverb_multich_min_16k | 98.13 | 18.81 | 18.39 | 29.47 | 18.12 |
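
SI-SNR (scale-invariant signal-to-noise ratio) is also the validation metric used for model selection in the config below (best_model_criterion). A minimal NumPy sketch of how it is computed; ESPnet's own implementation differs in details such as eps handling:

```python
# Scale-invariant SNR (SI-SNR) in dB, sketched in NumPy.
import numpy as np

def si_snr(ref: np.ndarray, est: np.ndarray, eps: float = 1e-8) -> float:
    """SI-SNR between a reference and an estimated 1-D signal."""
    ref = ref - ref.mean()
    est = est - est.mean()
    # Project the estimate onto the reference (optimal rescaling),
    # which makes the metric invariant to the estimate's gain.
    target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    noise = est - target
    return 10 * np.log10((np.dot(target, target) + eps) / (np.dot(noise, noise) + eps))
```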

ENH config

config: ./conf/tuning/train_enh_tfgridnet_waspaa2023.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_tfgridnet_waspaa2023_raw
ngpu: 1
seed: 0
num_workers: 6
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 25
patience: 5
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
-   - valid
    - si_snr
    - max
-   - valid
    - loss
    - min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 12
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
- exp/enh_stats_16k/train/speech_ref2_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
- exp/enh_stats_16k/valid/speech_ref2_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
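# Chunk iterator settings: 32000-sample chunks (2 s at 16 kHz) with 50% shift.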
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
-   - dump/raw/tr_spatialized_multi_multich_min_16k/wav.scp
    - speech_mix
    - sound
-   - dump/raw/tr_spatialized_multi_multich_min_16k/spk1.scp
    - speech_ref1
    - sound
-   - dump/raw/tr_spatialized_multi_multich_min_16k/spk2.scp
    - speech_ref2
    - sound
valid_data_path_and_name_and_type:
-   - dump/raw/cv_spatialized_multi_multich_min_16k/wav.scp
    - speech_mix
    - sound
-   - dump/raw/cv_spatialized_multi_multich_min_16k/spk1.scp
    - speech_ref1
    - sound
-   - dump/raw/cv_spatialized_multi_multich_min_16k/spk2.scp
    - speech_ref2
    - sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
    lr: 0.001
    eps: 1.0e-08
    weight_decay: 1.0e-05
scheduler: reducelronplateau
scheduler_conf:
    mode: min
    factor: 0.5
    patience: 50
init: xavier_uniform
model_conf:
    stft_consistency: false
    loss_type: mask_mse
    mask_type: null
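# Training losses, each wrapped in PIT (see the sketch after this config):
# a multi-resolution time/TF-domain L1 term (weight 1.0) and a CI-SDR term
# that is disabled here (weight 0.0).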
criterions:
-   name: mr_l1_tfd
    conf:
        window_sz:
        - 512
        time_domain_weight: 0.99
    wrapper: pit
    wrapper_conf:
        weight: 1.0
-   name: ci_sdr
    conf:
        filter_length: 512
    wrapper: pit
    wrapper_conf:
        weight: 0.0
        independent_perm: false
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
use_reverberant_ref: false
num_spk: 1
num_noise_type: 1
sample_rate: 8000
force_single_channel: false
dynamic_mixing: false
utt2spk: null
dynamic_mixing_gain_db: 0.0
encoder: same
encoder_conf: {}
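# TF-GridNet separator: 6 blocks, 8 input microphones, 2 output sources.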
separator: tfgridnet
separator_conf:
    n_srcs: 2
    n_fft: 512
    stride: 256
    window: hann
    n_imics: 8
    n_layers: 6
    lstm_hidden_units: 192
    attn_n_head: 4
    attn_approx_qk_dim: 512
    emb_dim: 48
    emb_ks: 4
    emb_hs: 2
    activation: gelu
    eps: 1.0e-05
    ref_channel: 0
decoder: same
decoder_conf: {}
mask_module: multi_mask
mask_module_conf: {}
preprocessor: null
preprocessor_conf: {}
required:
- output_dir
version: '202304'
distributed: false
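
The criterions block above wraps each loss in permutation-invariant training (PIT), which resolves the arbitrary ordering of the two separated speakers. A minimal 2-speaker sketch of what a PIT wrapper does, not ESPnet's actual PIT solver (which generalizes to N speakers and arbitrary criterions):

```python
# Minimal PIT sketch in PyTorch: evaluate the criterion under every
# speaker permutation and keep the best one per batch element.
from itertools import permutations
import torch

def pit_loss(refs, ests, criterion):
    """Return the loss under the best speaker permutation.

    refs, ests: lists of (batch, samples) tensors, one per speaker.
    criterion:  pairwise loss returning a (batch,) tensor,
                e.g. lambda r, e: (r - e).abs().mean(dim=-1)
    """
    per_perm = []
    for perm in permutations(range(len(refs))):
        # Average the pairwise losses over speakers for this assignment.
        loss = torch.stack(
            [criterion(refs[i], ests[j]) for i, j in enumerate(perm)]
        ).mean(dim=0)
        per_perm.append(loss)
    # For each batch element, keep the permutation with the smallest loss.
    best, _ = torch.stack(per_perm, dim=0).min(dim=0)
    return best.mean()
```

With the config above, `criterion` would be the mr_l1_tfd loss (and, were its weight nonzero, CI-SDR would reuse the same permutation since independent_perm is false).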

Citing ESPnet

@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}


@inproceedings{ESPnet-SE,
  author = {Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and
  Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{\"{o}}ddeker and Zhuo Chen and Shinji Watanabe},
  title = {ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
  booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2021, Shenzhen, China, January 19-22, 2021},
  pages = {785--792},
  publisher = {{IEEE}},
  year = {2021},
  url = {https://doi.org/10.1109/SLT48900.2021.9383615},
  doi = {10.1109/SLT48900.2021.9383615},
  timestamp = {Mon, 12 Apr 2021 17:08:59 +0200},
  biburl = {https://dblp.org/rec/conf/slt/Li0ZSCKHHBC021.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

or arXiv:

@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}