Commit fb6ad6a by lichenda (1 parent: 4c8f21d): Update model

README.md CHANGED

---
tags:
- espnet
- audio
- audio-to-audio
language: noinfo
datasets:
- chime4
license: cc-by-4.0
---

## ESPnet2 ENH model

### `lichenda/chime4_fasnet_dprnn_tac`

This model was trained by LiChenda using the chime4 recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```bash
cd espnet
git checkout 648b024d8fb262eb9923c06a698b9c6df5b16e51
pip install -e .
cd egs2/chime4/enh1
./run.sh --skip_data_prep false --skip_train true --download_model lichenda/chime4_fasnet_dprnn_tac
```
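Outside the recipe, the model can also be loaded through ESPnet's Python inference API. This is a minimal sketch, assuming `espnet`, `espnet_model_zoo`, and `soundfile` are installed; `mixture.wav` is a hypothetical placeholder for a 16 kHz multi-channel CHiME-4 recording:

```python
import soundfile as sf
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.enh_inference import SeparateSpeech

# Download the packed model from the Hugging Face Hub and build the
# enhancement frontend from its train_config / model_file.
d = ModelDownloader()
separate_speech = SeparateSpeech(**d.download_and_unpack("lichenda/chime4_fasnet_dprnn_tac"))

# mixture.wav is a placeholder: a multi-channel recording, shape (samples, channels).
mix, fs = sf.read("mixture.wav")
enhanced = separate_speech(mix[None, ...], fs=fs)  # list with one enhanced waveform (num_spk: 1)
```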

<!-- Generated by ./scripts/utils/show_enh_score.sh -->
# RESULTS
## Environments
- date: `Sat Mar 19 07:17:45 CST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `648b024d8fb262eb9923c06a698b9c6df5b16e51`
- Commit date: `Wed Mar 16 18:47:21 2022 +0800`

## Enhancement scores

config: conf/tuning/train_enh_dprnntac_fasnet.yaml

|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_dt05_simu_isolated_6ch_track|0.95|15.75|15.75|0.00|
|enhanced_et05_simu_isolated_6ch_track|0.94|15.40|15.40|0.00|

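The training criterion in the config below is SI-SNR (scale-invariant signal-to-noise ratio). As a rough illustration of what that metric measures, here is a minimal numpy sketch of the common definition; `si_snr` is an illustrative helper, not ESPnet's implementation:

```python
import numpy as np

def si_snr(ref, est, eps=1e-7):
    """Scale-invariant SNR in dB between a reference and an estimate."""
    ref = ref - ref.mean()
    est = est - est.mean()
    # Project the estimate onto the reference to get the target component.
    s_target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    e_noise = est - s_target
    return 10.0 * np.log10(np.sum(s_target ** 2) / (np.sum(e_noise ** 2) + eps))
```

Because the estimate is projected onto the reference, rescaling the estimate leaves the score unchanged, which is why the criterion is called scale-invariant.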
## ENH config

<details><summary>expand</summary>

```
config: conf/tuning/train_enh_dprnntac_fasnet.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_dprnntac_fasnet_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - si_snr
  - max
- - valid
  - loss
  - min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_simu_isolated_6ch_track/wav.scp
  - speech_mix
  - sound
- - dump/raw/tr05_simu_isolated_6ch_track/spk1.scp
  - speech_ref1
  - sound
valid_data_path_and_name_and_type:
- - dump/raw/dt05_simu_isolated_6ch_track/wav.scp
  - speech_mix
  - sound
- - dump/raw/dt05_simu_isolated_6ch_track/spk1.scp
  - speech_ref1
  - sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
  lr: 0.001
  eps: 1.0e-08
  weight_decay: 0
scheduler: steplr
scheduler_conf:
  step_size: 2
  gamma: 0.98
init: xavier_uniform
model_conf:
  stft_consistency: false
  loss_type: mask_mse
  mask_type: null
criterions:
- name: si_snr
  conf:
    eps: 1.0e-07
  wrapper: fixed_order
  wrapper_conf:
    weight: 1.0
use_preprocessor: false
encoder: same
encoder_conf: {}
separator: fasnet
separator_conf:
  enc_dim: 64
  feature_dim: 64
  hidden_dim: 128
  layer: 6
  segment_size: 24
  num_spk: 1
  win_len: 16
  context_len: 16
  sr: 16000
  fasnet_type: fasnet
  dropout: 0.2
decoder: same
decoder_conf: {}
required:
- output_dir
version: 0.10.7a1
distributed: false
```

</details>
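The config trains with `iterator_type: chunk`, `chunk_length: 32000`, and `chunk_shift_ratio: 0.5`, i.e. 2-second chunks at 16 kHz with 50% overlap. A toy sketch of how such chunk boundaries can be derived (`chunk_indices` is an illustrative helper, not ESPnet code):

```python
def chunk_indices(n_samples, chunk_length=32000, shift_ratio=0.5):
    """Return (start, end) sample index pairs of overlapping fixed-length chunks."""
    hop = int(chunk_length * shift_ratio)  # 16000 samples for a 0.5 shift ratio
    starts = range(0, max(n_samples - chunk_length, 0) + 1, hop)
    return [(s, s + chunk_length) for s in starts]
```

For an 80000-sample utterance (the `fold_length` above), this yields four 50%-overlapped chunks.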


### Citing ESPnet

```bibtex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}

@inproceedings{ESPnet-SE,
  author = {Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{\"{o}}ddeker and Zhuo Chen and Shinji Watanabe},
  title = {ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
  booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2021, Shenzhen, China, January 19-22, 2021},
  pages = {785--792},
  publisher = {{IEEE}},
  year = {2021},
  url = {https://doi.org/10.1109/SLT48900.2021.9383615},
  doi = {10.1109/SLT48900.2021.9383615},
  timestamp = {Mon, 12 Apr 2021 17:08:59 +0200},
  biburl = {https://dblp.org/rec/conf/slt/Li0ZSCKHHBC021.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
exp/enh_stats_16k/train/feats_stats.npz ADDED
Binary file (802 Bytes).

exp/enh_train_enh_dprnntac_fasnet_raw/59epoch.pth ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:1a31da18f7a0cb32295dfab451475c1f543ec603a66921c8e207d418d0018e5a
size 16366144
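The checkpoint above is stored via Git LFS, so the repository holds the small pointer file shown rather than the ~16 MB weights. A minimal sketch of reading such a pointer file (`parse_lfs_pointer` is an illustrative helper, not part of the repo):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file (key-value lines) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:1a31da18f7a0cb32295dfab451475c1f543ec603a66921c8e207d418d0018e5a
size 16366144
"""
info = parse_lfs_pointer(pointer)
```

The `oid` names the actual object by its SHA-256 hash, and `size` is the byte length of the real file.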
exp/enh_train_enh_dprnntac_fasnet_raw/RESULTS.md ADDED

<!-- Generated by ./scripts/utils/show_enh_score.sh -->
# RESULTS
## Environments
- date: `Sat Mar 19 07:17:45 CST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `648b024d8fb262eb9923c06a698b9c6df5b16e51`
- Commit date: `Wed Mar 16 18:47:21 2022 +0800`

## Enhancement scores

config: conf/tuning/train_enh_dprnntac_fasnet.yaml

|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_dt05_simu_isolated_6ch_track|0.95|15.75|15.75|0.00|
|enhanced_et05_simu_isolated_6ch_track|0.94|15.40|15.40|0.00|

exp/enh_train_enh_dprnntac_fasnet_raw/config.yaml ADDED

config: conf/tuning/train_enh_dprnntac_fasnet.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_dprnntac_fasnet_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - si_snr
  - max
- - valid
  - loss
  - min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_simu_isolated_6ch_track/wav.scp
  - speech_mix
  - sound
- - dump/raw/tr05_simu_isolated_6ch_track/spk1.scp
  - speech_ref1
  - sound
valid_data_path_and_name_and_type:
- - dump/raw/dt05_simu_isolated_6ch_track/wav.scp
  - speech_mix
  - sound
- - dump/raw/dt05_simu_isolated_6ch_track/spk1.scp
  - speech_ref1
  - sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
  lr: 0.001
  eps: 1.0e-08
  weight_decay: 0
scheduler: steplr
scheduler_conf:
  step_size: 2
  gamma: 0.98
init: xavier_uniform
model_conf:
  stft_consistency: false
  loss_type: mask_mse
  mask_type: null
criterions:
- name: si_snr
  conf:
    eps: 1.0e-07
  wrapper: fixed_order
  wrapper_conf:
    weight: 1.0
use_preprocessor: false
encoder: same
encoder_conf: {}
separator: fasnet
separator_conf:
  enc_dim: 64
  feature_dim: 64
  hidden_dim: 128
  layer: 6
  segment_size: 24
  num_spk: 1
  win_len: 16
  context_len: 16
  sr: 16000
  fasnet_type: fasnet
  dropout: 0.2
decoder: same
decoder_conf: {}
required:
- output_dir
version: 0.10.7a1
distributed: false
exp/enh_train_enh_dprnntac_fasnet_raw/images/backward_time.png ADDED
exp/enh_train_enh_dprnntac_fasnet_raw/images/forward_time.png ADDED
exp/enh_train_enh_dprnntac_fasnet_raw/images/gpu_max_cached_mem_GB.png ADDED
exp/enh_train_enh_dprnntac_fasnet_raw/images/iter_time.png ADDED
exp/enh_train_enh_dprnntac_fasnet_raw/images/loss.png ADDED
exp/enh_train_enh_dprnntac_fasnet_raw/images/optim0_lr0.png ADDED
exp/enh_train_enh_dprnntac_fasnet_raw/images/optim_step_time.png ADDED
exp/enh_train_enh_dprnntac_fasnet_raw/images/si_snr_loss.png ADDED
exp/enh_train_enh_dprnntac_fasnet_raw/images/train_time.png ADDED
meta.yaml ADDED

espnet: 0.10.7a1
files:
  model_file: exp/enh_train_enh_dprnntac_fasnet_raw/59epoch.pth
python: "3.7.11 (default, Jul 27 2021, 14:32:16) \n[GCC 7.5.0]"
timestamp: 1647850775.717462
torch: 1.8.1
yaml_files:
  train_config: exp/enh_train_enh_dprnntac_fasnet_raw/config.yaml
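The `timestamp` field in meta.yaml is a Unix epoch time in seconds (here, the packing time of the model). It can be decoded with the standard library:

```python
from datetime import datetime, timezone

# Value taken from the meta.yaml above.
ts = 1647850775.717462
packed_at = datetime.fromtimestamp(ts, tz=timezone.utc)  # 2022-03-21 UTC
```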