csukuangfj committed
Commit b3c4165
1 Parent(s): 2445857

add decoding results
Files changed (30)
  1. decoding-results/fast_beam_search/errs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt +0 -0
  2. decoding-results/fast_beam_search/errs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt +0 -0
  3. decoding-results/fast_beam_search/errs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt +0 -0
  4. decoding-results/fast_beam_search/errs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt +0 -0
  5. decoding-results/fast_beam_search/log-decode-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model-2022-09-01-12-18-42 +30 -0
  6. decoding-results/fast_beam_search/log-decode-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model-2022-09-01-14-03-37 +30 -0
  7. decoding-results/fast_beam_search/recogs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt +0 -0
  8. decoding-results/fast_beam_search/recogs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt +0 -0
  9. decoding-results/fast_beam_search/recogs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt +0 -0
  10. decoding-results/fast_beam_search/recogs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt +0 -0
  11. decoding-results/greedy_search/errs-test-clean-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
  12. decoding-results/greedy_search/errs-test-clean-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
  13. decoding-results/greedy_search/errs-test-other-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
  14. decoding-results/greedy_search/errs-test-other-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
  15. decoding-results/greedy_search/log-decode-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model-2022-09-01-12-06-18 +28 -0
  16. decoding-results/greedy_search/log-decode-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model-2022-09-01-13-52-30 +28 -0
  17. decoding-results/greedy_search/recogs-test-clean-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
  18. decoding-results/greedy_search/recogs-test-clean-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
  19. decoding-results/greedy_search/recogs-test-other-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
  20. decoding-results/greedy_search/recogs-test-other-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
  21. decoding-results/modified_beam_search/errs-test-clean-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
  22. decoding-results/modified_beam_search/errs-test-clean-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
  23. decoding-results/modified_beam_search/errs-test-other-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
  24. decoding-results/modified_beam_search/errs-test-other-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
  25. decoding-results/modified_beam_search/log-decode-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model-2022-09-01-12-44-30 +30 -0
  26. decoding-results/modified_beam_search/log-decode-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model-2022-09-01-14-27-47 +30 -0
  27. decoding-results/modified_beam_search/recogs-test-clean-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
  28. decoding-results/modified_beam_search/recogs-test-clean-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
  29. decoding-results/modified_beam_search/recogs-test-other-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
  30. decoding-results/modified_beam_search/recogs-test-other-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
decoding-results/fast_beam_search/errs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/fast_beam_search/errs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/fast_beam_search/errs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/fast_beam_search/errs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/fast_beam_search/log-decode-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model-2022-09-01-12-18-42 ADDED
@@ -0,0 +1,30 @@
+ 2022-09-01 12:18:42,164 INFO [decode.py:663] Decoding started
+ 2022-09-01 12:18:42,164 INFO [decode.py:669] Device: cuda:0
+ 2022-09-01 12:18:42,167 INFO [decode.py:679] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'dim_feedforward': 2048, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+missing.version.file', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'lstm-giga-libri', 'icefall-git-sha1': 'e3128cb-dirty', 'icefall-git-date': 'Mon Aug 29 19:05:41 2022', 'icefall-path': '/k2-dev/fangjun/open-source/icefall-lstm-giga', 'k2-path': '/ceph-fj/fangjun/open-source-2/k2-multi-22/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-fj/fangjun/open-source-2/lhotse-jsonl/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0602201035-5fb6d86964-mclm7', 'IP address': '10.177.74.202'}, 'epoch': 30, 'iter': 468000, 'avg': 16, 'use_averaged_model': True, 'exp_dir': PosixPath('lstm_transducer_stateless2/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4.0, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': 12, 'encoder_dim': 512, 'rnn_hidden_size': 1024, 'aux_layer_period': 0, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'on_the_fly_num_workers': 0, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('lstm_transducer_stateless2/exp/fast_beam_search'), 'suffix': 'iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-09-01 12:18:42,167 INFO [decode.py:681] About to create model
+ 2022-09-01 12:18:42,546 INFO [train.py:464] Disable giga
+ 2022-09-01 12:18:42,559 INFO [decode.py:735] Calculating the averaged model over iteration checkpoints from lstm_transducer_stateless2/exp/checkpoint-436000.pt (excluded) to lstm_transducer_stateless2/exp/checkpoint-468000.pt
+ 2022-09-01 12:18:48,456 INFO [decode.py:791] Number of model parameters: 84689496
+ 2022-09-01 12:18:48,456 INFO [librispeech.py:58] About to get test-clean cuts from data/fbank/librispeech_cuts_test-clean.jsonl.gz
+ 2022-09-01 12:18:48,459 INFO [librispeech.py:63] About to get test-other cuts from data/fbank/librispeech_cuts_test-other.jsonl.gz
+ 2022-09-01 12:18:50,915 INFO [decode.py:565] batch 0/?, cuts processed until now is 27
+ 2022-09-01 12:19:14,887 INFO [decode.py:565] batch 20/?, cuts processed until now is 1623
+ 2022-09-01 12:19:32,610 INFO [decode.py:565] batch 40/?, cuts processed until now is 2468
+ 2022-09-01 12:19:45,959 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/fast_beam_search/recogs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt
+ 2022-09-01 12:19:46,027 INFO [utils.py:428] [test-clean-beam_4.0_max_contexts_4_max_states_8] %WER 2.76% [1451 / 52576, 168 ins, 103 del, 1180 sub ]
+ 2022-09-01 12:19:46,197 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/fast_beam_search/errs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt
+ 2022-09-01 12:19:46,197 INFO [decode.py:613]
+ For test-clean, WER of different settings are:
+ beam_4.0_max_contexts_4_max_states_8 2.76 best for test-clean
+
+ 2022-09-01 12:19:48,304 INFO [decode.py:565] batch 0/?, cuts processed until now is 31
+ 2022-09-01 12:20:10,671 INFO [decode.py:565] batch 20/?, cuts processed until now is 1849
+ 2022-09-01 12:20:26,708 INFO [decode.py:565] batch 40/?, cuts processed until now is 2785
+ 2022-09-01 12:20:38,829 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/fast_beam_search/recogs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt
+ 2022-09-01 12:20:38,900 INFO [utils.py:428] [test-other-beam_4.0_max_contexts_4_max_states_8] %WER 7.31% [3827 / 52343, 397 ins, 383 del, 3047 sub ]
+ 2022-09-01 12:20:39,117 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/fast_beam_search/errs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt
+ 2022-09-01 12:20:39,118 INFO [decode.py:613]
+ For test-other, WER of different settings are:
+ beam_4.0_max_contexts_4_max_states_8 7.31 best for test-other
+
+ 2022-09-01 12:20:39,118 INFO [decode.py:823] Done!
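
The `%WER` lines above follow the usual convention: `%WER 2.76% [1451 / 52576, 168 ins, 103 del, 1180 sub ]` means 1451 total errors (insertions + deletions + substitutions) over 52576 reference words. A minimal sketch of that arithmetic (the function name is illustrative, not icefall's API):

```python
def wer_percent(ins: int, dels: int, subs: int, ref_words: int) -> float:
    """Word error rate as a percentage: (ins + del + sub) / #reference words."""
    return 100.0 * (ins + dels + subs) / ref_words

# Figures from the fast_beam_search log above (iter 468000, avg 16):
print(round(wer_percent(168, 103, 1180, 52576), 2))  # test-clean -> 2.76
print(round(wer_percent(397, 383, 3047, 52343), 2))  # test-other -> 7.31
```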
decoding-results/fast_beam_search/log-decode-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model-2022-09-01-14-03-37 ADDED
@@ -0,0 +1,30 @@
+ 2022-09-01 14:03:37,829 INFO [decode.py:663] Decoding started
+ 2022-09-01 14:03:37,829 INFO [decode.py:669] Device: cuda:0
+ 2022-09-01 14:03:37,831 INFO [decode.py:679] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'dim_feedforward': 2048, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+missing.version.file', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'lstm-giga-libri', 'icefall-git-sha1': 'e3128cb-dirty', 'icefall-git-date': 'Mon Aug 29 19:05:41 2022', 'icefall-path': '/k2-dev/fangjun/open-source/icefall-lstm-giga', 'k2-path': '/ceph-fj/fangjun/open-source-2/k2-multi-22/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-fj/fangjun/open-source-2/lhotse-jsonl/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0602201035-5fb6d86964-mclm7', 'IP address': '10.177.74.202'}, 'epoch': 30, 'iter': 472000, 'avg': 18, 'use_averaged_model': True, 'exp_dir': PosixPath('lstm_transducer_stateless2/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4.0, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': 12, 'encoder_dim': 512, 'rnn_hidden_size': 1024, 'aux_layer_period': 0, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'on_the_fly_num_workers': 0, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('lstm_transducer_stateless2/exp/fast_beam_search'), 'suffix': 'iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-09-01 14:03:37,831 INFO [decode.py:681] About to create model
+ 2022-09-01 14:03:38,172 INFO [train.py:464] Disable giga
+ 2022-09-01 14:03:38,183 INFO [decode.py:735] Calculating the averaged model over iteration checkpoints from lstm_transducer_stateless2/exp/checkpoint-436000.pt (excluded) to lstm_transducer_stateless2/exp/checkpoint-472000.pt
+ 2022-09-01 14:03:43,906 INFO [decode.py:791] Number of model parameters: 84689496
+ 2022-09-01 14:03:43,906 INFO [librispeech.py:58] About to get test-clean cuts from data/fbank/librispeech_cuts_test-clean.jsonl.gz
+ 2022-09-01 14:03:43,908 INFO [librispeech.py:63] About to get test-other cuts from data/fbank/librispeech_cuts_test-other.jsonl.gz
+ 2022-09-01 14:03:46,290 INFO [decode.py:565] batch 0/?, cuts processed until now is 27
+ 2022-09-01 14:04:10,303 INFO [decode.py:565] batch 20/?, cuts processed until now is 1623
+ 2022-09-01 14:04:28,033 INFO [decode.py:565] batch 40/?, cuts processed until now is 2468
+ 2022-09-01 14:04:41,371 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/fast_beam_search/recogs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt
+ 2022-09-01 14:04:41,458 INFO [utils.py:428] [test-clean-beam_4.0_max_contexts_4_max_states_8] %WER 2.77% [1456 / 52576, 168 ins, 106 del, 1182 sub ]
+ 2022-09-01 14:04:41,676 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/fast_beam_search/errs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt
+ 2022-09-01 14:04:41,677 INFO [decode.py:613]
+ For test-clean, WER of different settings are:
+ beam_4.0_max_contexts_4_max_states_8 2.77 best for test-clean
+
+ 2022-09-01 14:04:43,799 INFO [decode.py:565] batch 0/?, cuts processed until now is 31
+ 2022-09-01 14:05:06,278 INFO [decode.py:565] batch 20/?, cuts processed until now is 1849
+ 2022-09-01 14:05:22,256 INFO [decode.py:565] batch 40/?, cuts processed until now is 2785
+ 2022-09-01 14:05:34,340 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/fast_beam_search/recogs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt
+ 2022-09-01 14:05:34,429 INFO [utils.py:428] [test-other-beam_4.0_max_contexts_4_max_states_8] %WER 7.29% [3816 / 52343, 408 ins, 373 del, 3035 sub ]
+ 2022-09-01 14:05:34,603 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/fast_beam_search/errs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt
+ 2022-09-01 14:05:34,604 INFO [decode.py:613]
+ For test-other, WER of different settings are:
+ beam_4.0_max_contexts_4_max_states_8 7.29 best for test-other
+
+ 2022-09-01 14:05:34,604 INFO [decode.py:823] Done!
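
The "Calculating the averaged model over iteration checkpoints from checkpoint-436000.pt (excluded) to checkpoint-472000.pt" lines reflect checkpoints saved every 2000 iterations: (468000 - 436000) / 2000 = 16 matches `--avg 16`, and (472000 - 436000) / 2000 = 18 matches `--avg 18`. A simplified sketch of the idea, with plain floats standing in for tensors (icefall's actual `--use-averaged-model` code reconstructs the average from running sums stored in the checkpoints rather than loading all of them):

```python
def average_checkpoints(state_dicts):
    """Element-wise average of model state dicts.  Simplified sketch only;
    plain floats stand in for tensors."""
    n = len(state_dicts)
    avg = {k: 0.0 for k in state_dicts[0]}
    for sd in state_dicts:
        for k, v in sd.items():
            avg[k] += v / n
    return avg

# e.g. --iter 468000 --avg 16 covers the 16 checkpoints saved every
# 2000 iterations in the interval (436000, 468000]:
print(len(range(438000, 468001, 2000)))                 # 16
print(average_checkpoints([{"w": 1.0}, {"w": 3.0}]))    # {'w': 2.0}
```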
decoding-results/fast_beam_search/recogs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/fast_beam_search/recogs-test-clean-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/fast_beam_search/recogs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-468000-avg-16-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/fast_beam_search/recogs-test-other-beam_4.0_max_contexts_4_max_states_8-iter-472000-avg-18-beam-4.0-max-contexts-4-max-states-8-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/greedy_search/errs-test-clean-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/greedy_search/errs-test-clean-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/greedy_search/errs-test-other-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/greedy_search/errs-test-other-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/greedy_search/log-decode-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model-2022-09-01-12-06-18 ADDED
@@ -0,0 +1,28 @@
+ 2022-09-01 12:06:18,758 INFO [decode.py:663] Decoding started
+ 2022-09-01 12:06:18,758 INFO [decode.py:669] Device: cuda:0
+ 2022-09-01 12:06:18,761 INFO [decode.py:679] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'dim_feedforward': 2048, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+missing.version.file', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'lstm-giga-libri', 'icefall-git-sha1': 'e3128cb-dirty', 'icefall-git-date': 'Mon Aug 29 19:05:41 2022', 'icefall-path': '/k2-dev/fangjun/open-source/icefall-lstm-giga', 'k2-path': '/ceph-fj/fangjun/open-source-2/k2-multi-22/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-fj/fangjun/open-source-2/lhotse-jsonl/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0602201035-5fb6d86964-mclm7', 'IP address': '10.177.74.202'}, 'epoch': 30, 'iter': 468000, 'avg': 16, 'use_averaged_model': True, 'exp_dir': PosixPath('lstm_transducer_stateless2/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 4.0, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': 12, 'encoder_dim': 512, 'rnn_hidden_size': 1024, 'aux_layer_period': 0, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'on_the_fly_num_workers': 0, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('lstm_transducer_stateless2/exp/greedy_search'), 'suffix': 'iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-09-01 12:06:18,761 INFO [decode.py:681] About to create model
+ 2022-09-01 12:06:19,131 INFO [train.py:464] Disable giga
+ 2022-09-01 12:06:19,141 INFO [decode.py:735] Calculating the averaged model over iteration checkpoints from lstm_transducer_stateless2/exp/checkpoint-436000.pt (excluded) to lstm_transducer_stateless2/exp/checkpoint-468000.pt
+ 2022-09-01 12:06:35,686 INFO [decode.py:791] Number of model parameters: 84689496
+ 2022-09-01 12:06:35,686 INFO [librispeech.py:58] About to get test-clean cuts from data/fbank/librispeech_cuts_test-clean.jsonl.gz
+ 2022-09-01 12:06:35,689 INFO [librispeech.py:63] About to get test-other cuts from data/fbank/librispeech_cuts_test-other.jsonl.gz
+ 2022-09-01 12:06:39,993 INFO [decode.py:565] batch 0/?, cuts processed until now is 27
+ 2022-09-01 12:07:23,133 INFO [decode.py:565] batch 50/?, cuts processed until now is 2555
+ 2022-09-01 12:07:25,717 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/greedy_search/recogs-test-clean-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2022-09-01 12:07:25,791 INFO [utils.py:428] [test-clean-greedy_search] %WER 2.78% [1460 / 52576, 156 ins, 114 del, 1190 sub ]
+ 2022-09-01 12:07:25,961 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/greedy_search/errs-test-clean-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2022-09-01 12:07:25,962 INFO [decode.py:613]
+ For test-clean, WER of different settings are:
+ greedy_search 2.78 best for test-clean
+
+ 2022-09-01 12:07:27,347 INFO [decode.py:565] batch 0/?, cuts processed until now is 31
+ 2022-09-01 12:07:52,129 INFO [decode.py:565] batch 50/?, cuts processed until now is 2869
+ 2022-09-01 12:07:54,086 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/greedy_search/recogs-test-other-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2022-09-01 12:07:54,164 INFO [utils.py:428] [test-other-greedy_search] %WER 7.36% [3854 / 52343, 377 ins, 417 del, 3060 sub ]
+ 2022-09-01 12:07:54,344 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/greedy_search/errs-test-other-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2022-09-01 12:07:54,344 INFO [decode.py:613]
+ For test-other, WER of different settings are:
+ greedy_search 7.36 best for test-other
+
+ 2022-09-01 12:07:54,345 INFO [decode.py:823] Done!
decoding-results/greedy_search/log-decode-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model-2022-09-01-13-52-30 ADDED
@@ -0,0 +1,28 @@
+ 2022-09-01 13:52:30,671 INFO [decode.py:663] Decoding started
+ 2022-09-01 13:52:30,671 INFO [decode.py:669] Device: cuda:0
+ 2022-09-01 13:52:30,674 INFO [decode.py:679] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'dim_feedforward': 2048, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+missing.version.file', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'lstm-giga-libri', 'icefall-git-sha1': 'e3128cb-dirty', 'icefall-git-date': 'Mon Aug 29 19:05:41 2022', 'icefall-path': '/k2-dev/fangjun/open-source/icefall-lstm-giga', 'k2-path': '/ceph-fj/fangjun/open-source-2/k2-multi-22/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-fj/fangjun/open-source-2/lhotse-jsonl/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0602201035-5fb6d86964-mclm7', 'IP address': '10.177.74.202'}, 'epoch': 30, 'iter': 472000, 'avg': 18, 'use_averaged_model': True, 'exp_dir': PosixPath('lstm_transducer_stateless2/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 4.0, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': 12, 'encoder_dim': 512, 'rnn_hidden_size': 1024, 'aux_layer_period': 0, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'on_the_fly_num_workers': 0, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('lstm_transducer_stateless2/exp/greedy_search'), 'suffix': 'iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-09-01 13:52:30,675 INFO [decode.py:681] About to create model
+ 2022-09-01 13:52:31,031 INFO [train.py:464] Disable giga
+ 2022-09-01 13:52:31,046 INFO [decode.py:735] Calculating the averaged model over iteration checkpoints from lstm_transducer_stateless2/exp/checkpoint-436000.pt (excluded) to lstm_transducer_stateless2/exp/checkpoint-472000.pt
+ 2022-09-01 13:52:36,873 INFO [decode.py:791] Number of model parameters: 84689496
+ 2022-09-01 13:52:36,873 INFO [librispeech.py:58] About to get test-clean cuts from data/fbank/librispeech_cuts_test-clean.jsonl.gz
+ 2022-09-01 13:52:36,875 INFO [librispeech.py:63] About to get test-other cuts from data/fbank/librispeech_cuts_test-other.jsonl.gz
+ 2022-09-01 13:52:38,424 INFO [decode.py:565] batch 0/?, cuts processed until now is 27
+ 2022-09-01 13:53:05,204 INFO [decode.py:565] batch 50/?, cuts processed until now is 2555
+ 2022-09-01 13:53:07,412 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/greedy_search/recogs-test-clean-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2022-09-01 13:53:07,491 INFO [utils.py:428] [test-clean-greedy_search] %WER 2.77% [1454 / 52576, 155 ins, 112 del, 1187 sub ]
+ 2022-09-01 13:53:07,701 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/greedy_search/errs-test-clean-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2022-09-01 13:53:07,701 INFO [decode.py:613]
+ For test-clean, WER of different settings are:
+ greedy_search 2.77 best for test-clean
+
+ 2022-09-01 13:53:09,020 INFO [decode.py:565] batch 0/?, cuts processed until now is 31
+ 2022-09-01 13:53:33,729 INFO [decode.py:565] batch 50/?, cuts processed until now is 2869
+ 2022-09-01 13:53:35,727 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/greedy_search/recogs-test-other-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2022-09-01 13:53:35,798 INFO [utils.py:428] [test-other-greedy_search] %WER 7.35% [3846 / 52343, 381 ins, 422 del, 3043 sub ]
+ 2022-09-01 13:53:35,969 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/greedy_search/errs-test-other-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2022-09-01 13:53:35,970 INFO [decode.py:613]
+ For test-other, WER of different settings are:
+ greedy_search 7.35 best for test-other
+
+ 2022-09-01 13:53:35,970 INFO [decode.py:823] Done!
decoding-results/greedy_search/recogs-test-clean-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/greedy_search/recogs-test-clean-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/greedy_search/recogs-test-other-greedy_search-iter-468000-avg-16-context-2-max-sym-per-frame-1-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/greedy_search/recogs-test-other-greedy_search-iter-472000-avg-18-context-2-max-sym-per-frame-1-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/modified_beam_search/errs-test-clean-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/modified_beam_search/errs-test-clean-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/modified_beam_search/errs-test-other-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/modified_beam_search/errs-test-other-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/modified_beam_search/log-decode-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model-2022-09-01-12-44-30 ADDED
@@ -0,0 +1,30 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ 2022-09-01 12:44:30,974 INFO [decode.py:663] Decoding started
2
+ 2022-09-01 12:44:30,974 INFO [decode.py:669] Device: cuda:0
3
+ 2022-09-01 12:44:30,977 INFO [decode.py:679] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'dim_feedforward': 2048, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+missing.version.file', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'lstm-giga-libri', 'icefall-git-sha1': 'e3128cb-dirty', 'icefall-git-date': 'Mon Aug 29 19:05:41 2022', 'icefall-path': '/k2-dev/fangjun/open-source/icefall-lstm-giga', 'k2-path': '/ceph-fj/fangjun/open-source-2/k2-multi-22/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-fj/fangjun/open-source-2/lhotse-jsonl/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0602201035-5fb6d86964-mclm7', 'IP address': '10.177.74.202'}, 'epoch': 30, 'iter': 468000, 'avg': 16, 'use_averaged_model': True, 'exp_dir': PosixPath('lstm_transducer_stateless2/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'modified_beam_search', 'beam_size': 4, 'beam': 4.0, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': 12, 'encoder_dim': 512, 'rnn_hidden_size': 1024, 'aux_layer_period': 0, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'on_the_fly_num_workers': 0, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('lstm_transducer_stateless2/exp/modified_beam_search'), 'suffix': 'iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-09-01 12:44:30,977 INFO [decode.py:681] About to create model
+ 2022-09-01 12:44:31,351 INFO [train.py:464] Disable giga
+ 2022-09-01 12:44:31,361 INFO [decode.py:735] Calculating the averaged model over iteration checkpoints from lstm_transducer_stateless2/exp/checkpoint-436000.pt (excluded) to lstm_transducer_stateless2/exp/checkpoint-468000.pt
+ 2022-09-01 12:44:37,280 INFO [decode.py:791] Number of model parameters: 84689496
+ 2022-09-01 12:44:37,280 INFO [librispeech.py:58] About to get test-clean cuts from data/fbank/librispeech_cuts_test-clean.jsonl.gz
+ 2022-09-01 12:44:37,282 INFO [librispeech.py:63] About to get test-other cuts from data/fbank/librispeech_cuts_test-other.jsonl.gz
+ 2022-09-01 12:44:42,018 INFO [decode.py:565] batch 0/?, cuts processed until now is 27
+ 2022-09-01 12:45:54,973 INFO [decode.py:565] batch 20/?, cuts processed until now is 1623
+ 2022-09-01 12:46:32,803 INFO [decode.py:565] batch 40/?, cuts processed until now is 2468
+ 2022-09-01 12:46:49,067 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/modified_beam_search/recogs-test-clean-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-09-01 12:46:49,134 INFO [utils.py:428] [test-clean-beam_size_4] %WER 2.73% [1436 / 52576, 166 ins, 101 del, 1169 sub ]
+ 2022-09-01 12:46:49,302 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/modified_beam_search/errs-test-clean-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-09-01 12:46:49,302 INFO [decode.py:613]
+ For test-clean, WER of different settings are:
+ beam_size_4 2.73 best for test-clean
+
+ 2022-09-01 12:46:53,608 INFO [decode.py:565] batch 0/?, cuts processed until now is 31
+ 2022-09-01 12:48:04,973 INFO [decode.py:565] batch 20/?, cuts processed until now is 1849
+ 2022-09-01 12:48:40,924 INFO [decode.py:565] batch 40/?, cuts processed until now is 2785
+ 2022-09-01 12:48:55,789 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/modified_beam_search/recogs-test-other-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-09-01 12:48:55,876 INFO [utils.py:428] [test-other-beam_size_4] %WER 7.15% [3745 / 52343, 407 ins, 352 del, 2986 sub ]
+ 2022-09-01 12:48:56,051 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/modified_beam_search/errs-test-other-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-09-01 12:48:56,051 INFO [decode.py:613]
+ For test-other, WER of different settings are:
+ beam_size_4 7.15 best for test-other
+
+ 2022-09-01 12:48:56,052 INFO [decode.py:823] Done!
decoding-results/modified_beam_search/log-decode-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model-2022-09-01-14-27-47 ADDED
@@ -0,0 +1,30 @@
+ 2022-09-01 14:27:47,985 INFO [decode.py:663] Decoding started
+ 2022-09-01 14:27:47,985 INFO [decode.py:669] Device: cuda:0
+ 2022-09-01 14:27:47,987 INFO [decode.py:679] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'dim_feedforward': 2048, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+missing.version.file', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'lstm-giga-libri', 'icefall-git-sha1': 'e3128cb-dirty', 'icefall-git-date': 'Mon Aug 29 19:05:41 2022', 'icefall-path': '/k2-dev/fangjun/open-source/icefall-lstm-giga', 'k2-path': '/ceph-fj/fangjun/open-source-2/k2-multi-22/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-fj/fangjun/open-source-2/lhotse-jsonl/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0602201035-5fb6d86964-mclm7', 'IP address': '10.177.74.202'}, 'epoch': 30, 'iter': 472000, 'avg': 18, 'use_averaged_model': True, 'exp_dir': PosixPath('lstm_transducer_stateless2/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'modified_beam_search', 'beam_size': 4, 'beam': 4.0, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': 12, 'encoder_dim': 512, 'rnn_hidden_size': 1024, 'aux_layer_period': 0, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'on_the_fly_num_workers': 0, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('lstm_transducer_stateless2/exp/modified_beam_search'), 'suffix': 'iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-09-01 14:27:47,987 INFO [decode.py:681] About to create model
+ 2022-09-01 14:27:48,327 INFO [train.py:464] Disable giga
+ 2022-09-01 14:27:48,337 INFO [decode.py:735] Calculating the averaged model over iteration checkpoints from lstm_transducer_stateless2/exp/checkpoint-436000.pt (excluded) to lstm_transducer_stateless2/exp/checkpoint-472000.pt
+ 2022-09-01 14:27:53,879 INFO [decode.py:791] Number of model parameters: 84689496
+ 2022-09-01 14:27:53,880 INFO [librispeech.py:58] About to get test-clean cuts from data/fbank/librispeech_cuts_test-clean.jsonl.gz
+ 2022-09-01 14:27:53,882 INFO [librispeech.py:63] About to get test-other cuts from data/fbank/librispeech_cuts_test-other.jsonl.gz
+ 2022-09-01 14:27:58,373 INFO [decode.py:565] batch 0/?, cuts processed until now is 27
+ 2022-09-01 14:29:07,720 INFO [decode.py:565] batch 20/?, cuts processed until now is 1623
+ 2022-09-01 14:29:43,529 INFO [decode.py:565] batch 40/?, cuts processed until now is 2468
+ 2022-09-01 14:29:58,954 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/modified_beam_search/recogs-test-clean-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-09-01 14:29:59,020 INFO [utils.py:428] [test-clean-beam_size_4] %WER 2.75% [1448 / 52576, 171 ins, 105 del, 1172 sub ]
+ 2022-09-01 14:29:59,187 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/modified_beam_search/errs-test-clean-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-09-01 14:29:59,188 INFO [decode.py:613]
+ For test-clean, WER of different settings are:
+ beam_size_4 2.75 best for test-clean
+
+ 2022-09-01 14:30:03,309 INFO [decode.py:565] batch 0/?, cuts processed until now is 31
+ 2022-09-01 14:32:16,602 INFO [decode.py:565] batch 20/?, cuts processed until now is 1849
+ 2022-09-01 14:32:52,559 INFO [decode.py:565] batch 40/?, cuts processed until now is 2785
+ 2022-09-01 14:33:06,715 INFO [decode.py:583] The transcripts are stored in lstm_transducer_stateless2/exp/modified_beam_search/recogs-test-other-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-09-01 14:33:06,783 INFO [utils.py:428] [test-other-beam_size_4] %WER 7.08% [3707 / 52343, 406 ins, 337 del, 2964 sub ]
+ 2022-09-01 14:33:07,000 INFO [decode.py:596] Wrote detailed error stats to lstm_transducer_stateless2/exp/modified_beam_search/errs-test-other-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-09-01 14:33:07,001 INFO [decode.py:613]
+ For test-other, WER of different settings are:
+ beam_size_4 7.08 best for test-other
+
+ 2022-09-01 14:33:07,001 INFO [decode.py:823] Done!
decoding-results/modified_beam_search/recogs-test-clean-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/modified_beam_search/recogs-test-clean-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/modified_beam_search/recogs-test-other-beam_size_4-iter-468000-avg-16-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/modified_beam_search/recogs-test-other-beam_size_4-iter-472000-avg-18-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff