csukuangfj committed on
Commit e7e7424
1 Parent(s): 096646a

add fast beam search results.

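
The results added here come from fast_beam_search decoding (beam 4.0, max_contexts 32, max_states 8) of a model obtained by averaging 14 training checkpoints, as recorded in the decoding log below. As a rough illustration of what averaging the listed checkpoint-*.pt files amounts to, here is a minimal sketch; it assumes each checkpoint is a PyTorch file whose "model" entry is a state_dict, and it is not icefall's actual average_checkpoints implementation.

```python
import torch

def average_checkpoints(paths):
    """Element-wise average of the model parameters stored in several checkpoints (sketch)."""
    avg = None
    for path in paths:
        # Assumption: each .pt file is a dict whose "model" entry is the model state_dict.
        state = torch.load(path, map_location="cpu")["model"]
        if avg is None:
            avg = {k: v.detach().clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}

# The log below averages checkpoint-1224000.pt down to checkpoint-1120000.pt
# (14 files, 8000 iterations apart) under pruned_transducer_stateless3/exp-0.9.
```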
decoding-results/fast_beam_search/errs-test-clean-beam_4.0_max_contexts_32_max_states_8-iter-1224000-avg-14-beam-4.0-max-contexts-32-max-states-8.txt ADDED
The diff for this file is too large to render.
decoding-results/fast_beam_search/errs-test-other-beam_4.0_max_contexts_32_max_states_8-iter-1224000-avg-14-beam-4.0-max-contexts-32-max-states-8.txt ADDED
The diff for this file is too large to render.
decoding-results/fast_beam_search/log-decode-iter-1224000-avg-14-beam-4.0-max-contexts-32-max-states-8-2022-05-13-10-11-35 ADDED
@@ -0,0 +1,31 @@
+ 2022-05-13 10:11:35,141 INFO [decode.py:531] Decoding started
+ 2022-05-13 10:11:35,142 INFO [decode.py:537] Device: cuda:0
+ 2022-05-13 10:11:35,144 INFO [decode.py:547] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'encoder_dim': 512, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 12, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.1.0.dev+missing.version.file', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'modified-conformer-with-multi-datasets', 'icefall-git-sha1': '00fd664-dirty', 'icefall-git-date': 'Fri Apr 29 15:28:42 2022', 'icefall-path': '/ceph-fj/fangjun/open-source-2/icefall-multi-4', 'k2-path': '/ceph-fj/fangjun/open-source-2/k2-multi-22/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-fj/fangjun/open-source-2/lhotse-multi-3/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0307200233-b554c565c-lf9qd', 'IP address': '10.177.74.201'}, 'epoch': 28, 'iter': 1224000, 'avg': 14, 'exp_dir': PosixPath('pruned_transducer_stateless3/exp-0.9'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4.0, 'max_contexts': 32, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 100, 'nbest_scale': 0.5, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'on_the_fly_num_workers': 0, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('pruned_transducer_stateless3/exp-0.9/fast_beam_search'), 'suffix': 'iter-1224000-avg-14-beam-4.0-max-contexts-32-max-states-8', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-05-13 10:11:35,144 INFO [decode.py:549] About to create model
+ 2022-05-13 10:11:35,622 INFO [decode.py:566] averaging ['pruned_transducer_stateless3/exp-0.9/checkpoint-1224000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1216000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1208000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1200000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1192000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1184000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1176000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1168000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1160000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1152000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1144000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1136000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1128000.pt', 'pruned_transducer_stateless3/exp-0.9/checkpoint-1120000.pt']
+ 2022-05-13 10:11:56,508 INFO [decode.py:595] Number of model parameters: 80199888
+ 2022-05-13 10:11:56,508 INFO [librispeech.py:58] About to get test-clean cuts from data/fbank/cuts_test-clean.json.gz
+ 2022-05-13 10:11:56,621 INFO [librispeech.py:63] About to get test-other cuts from data/fbank/cuts_test-other.json.gz
+ 2022-05-13 10:11:59,087 INFO [decode.py:438] batch 0/?, cuts processed until now is 123
+ 2022-05-13 10:12:15,259 INFO [decode.py:438] batch 10/?, cuts processed until now is 945
+ 2022-05-13 10:12:37,390 INFO [decode.py:438] batch 20/?, cuts processed until now is 1558
+ 2022-05-13 10:12:52,210 INFO [decode.py:438] batch 30/?, cuts processed until now is 2383
+ 2022-05-13 10:13:08,683 INFO [decode.py:455] The transcripts are stored in pruned_transducer_stateless3/exp-0.9/fast_beam_search/recogs-test-clean-beam_4.0_max_contexts_32_max_states_8-iter-1224000-avg-14-beam-4.0-max-contexts-32-max-states-8.txt
+ 2022-05-13 10:13:08,776 INFO [utils.py:405] [test-clean-beam_4.0_max_contexts_32_max_states_8] %WER 2.10% [1102 / 52576, 115 ins, 115 del, 872 sub ]
+ 2022-05-13 10:13:09,068 INFO [decode.py:468] Wrote detailed error stats to pruned_transducer_stateless3/exp-0.9/fast_beam_search/errs-test-clean-beam_4.0_max_contexts_32_max_states_8-iter-1224000-avg-14-beam-4.0-max-contexts-32-max-states-8.txt
+ 2022-05-13 10:13:09,069 INFO [decode.py:485]
+ For test-clean, WER of different settings are:
+ beam_4.0_max_contexts_32_max_states_8 2.1 best for test-clean
+
+ 2022-05-13 10:13:10,920 INFO [decode.py:438] batch 0/?, cuts processed until now is 138
+ 2022-05-13 10:13:25,921 INFO [decode.py:438] batch 10/?, cuts processed until now is 1070
+ 2022-05-13 10:13:46,529 INFO [decode.py:438] batch 20/?, cuts processed until now is 1765
+ 2022-05-13 10:13:59,688 INFO [decode.py:438] batch 30/?, cuts processed until now is 2653
+ 2022-05-13 10:14:14,049 INFO [decode.py:455] The transcripts are stored in pruned_transducer_stateless3/exp-0.9/fast_beam_search/recogs-test-other-beam_4.0_max_contexts_32_max_states_8-iter-1224000-avg-14-beam-4.0-max-contexts-32-max-states-8.txt
+ 2022-05-13 10:14:14,142 INFO [utils.py:405] [test-other-beam_4.0_max_contexts_32_max_states_8] %WER 4.68% [2449 / 52343, 245 ins, 223 del, 1981 sub ]
+ 2022-05-13 10:14:14,353 INFO [decode.py:468] Wrote detailed error stats to pruned_transducer_stateless3/exp-0.9/fast_beam_search/errs-test-other-beam_4.0_max_contexts_32_max_states_8-iter-1224000-avg-14-beam-4.0-max-contexts-32-max-states-8.txt
+ 2022-05-13 10:14:14,354 INFO [decode.py:485]
+ For test-other, WER of different settings are:
+ beam_4.0_max_contexts_32_max_states_8 4.68 best for test-other
+
+ 2022-05-13 10:14:14,354 INFO [decode.py:624] Done!
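
For reference, the %WER figures in the log above follow the usual word error rate definition: WER = (insertions + deletions + substitutions) / number of reference words. A quick arithmetic check against the logged counts (a sketch, not icefall's scoring code):

```python
def wer_percent(ins, dels, subs, ref_words):
    # Standard word error rate, expressed as a percentage.
    return 100.0 * (ins + dels + subs) / ref_words

# test-clean: %WER 2.10% [1102 / 52576, 115 ins, 115 del, 872 sub]
print(f"{wer_percent(115, 115, 872, 52576):.2f}")   # 2.10
# test-other: %WER 4.68% [2449 / 52343, 245 ins, 223 del, 1981 sub]
print(f"{wer_percent(245, 223, 1981, 52343):.2f}")  # 4.68
```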
decoding-results/fast_beam_search/recogs-test-clean-beam_4.0_max_contexts_32_max_states_8-iter-1224000-avg-14-beam-4.0-max-contexts-32-max-states-8.txt ADDED
The diff for this file is too large to render.
decoding-results/fast_beam_search/recogs-test-other-beam_4.0_max_contexts_32_max_states_8-iter-1224000-avg-14-beam-4.0-max-contexts-32-max-states-8.txt ADDED
The diff for this file is too large to render.