Cloud User committed on
Commit 907dc5b
1 Parent(s): c24dd82

Add decoding log

Files changed (10):
  1. log/fast_beam_search/.ipynb_checkpoints/cer-summary-eval_clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-checkpoint.txt +0 -2
  2. log/fast_beam_search/.ipynb_checkpoints/cer-summary-eval_other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-checkpoint.txt +0 -2
  3. log/fast_beam_search/log-decode-epoch-30-avg-9-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-2024-06-24-10-30-31 +33 -0
  4. log/greedy_search/.ipynb_checkpoints/cer-summary-eval_clean-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model-checkpoint.txt +0 -2
  5. log/greedy_search/.ipynb_checkpoints/cer-summary-eval_other-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model-checkpoint.txt +0 -2
  6. log/greedy_search/{.ipynb_checkpoints/log-decode-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model-2024-06-24-10-29-54-checkpoint → log-decode-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model-2024-06-24-10-29-54} +5 -5
  7. log/modified_beam_search/.ipynb_checkpoints/cer-summary-eval_clean-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model-checkpoint.txt +0 -2
  8. log/modified_beam_search/.ipynb_checkpoints/cer-summary-eval_other-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model-checkpoint.txt +0 -2
  9. log/modified_beam_search/log-decode-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model-2024-06-24-10-32-00 +33 -0
  10. log/tensorboard/events.out.tfevents.1718771868.gpu-1.1326770.0 +3 -0
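For reference, a decode run like the ones logged in these files is typically launched along these lines. This is a sketch reconstructed from the parameters recorded in the logs (epoch, avg, beam, max-contexts, max-states, exp-dir, bpe-model, max-duration); the exact flags depend on the icefall recipe version, so verify against your checkout before use.

```shell
# Hypothetical invocation reconstructed from the logged parameters;
# check the flags against your icefall zipformer recipe before running.
./zipformer/decode.py \
  --epoch 30 \
  --avg 9 \
  --use-averaged-model 1 \
  --exp-dir zipformer/exp \
  --bpe-model data/lang_bpe_5000/bpe.model \
  --max-duration 200 \
  --decoding-method fast_beam_search \
  --beam 20.0 \
  --max-contexts 8 \
  --max-states 64
```

Swapping `--decoding-method` to `greedy_search` or `modified_beam_search` (with `--beam-size 4`) would correspond to the other two log directories in this commit.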
log/fast_beam_search/.ipynb_checkpoints/cer-summary-eval_clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-checkpoint.txt DELETED
@@ -1,2 +0,0 @@
- settings CER
- beam_20.0_max_contexts_8_max_states_64 10.59

log/fast_beam_search/.ipynb_checkpoints/cer-summary-eval_other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-checkpoint.txt DELETED
@@ -1,2 +0,0 @@
- settings CER
- beam_20.0_max_contexts_8_max_states_64 11.54

log/fast_beam_search/log-decode-epoch-30-avg-9-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-2024-06-24-10-30-31 ADDED
@@ -0,0 +1,33 @@
+ 2024-06-24 10:30:31,308 INFO [decode.py:832] Decoding started
+ 2024-06-24 10:30:31,309 INFO [decode.py:838] Device: cuda:0
+ 2024-06-24 10:30:31,318 INFO [decode.py:848] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '8f976a1e1407e330e2a233d68f81b1eb5269fdaa', 'k2-git-date': 'Thu Jun 6 02:13:08 2024', 'lhotse-version': '1.24.0.dev+git.4d57d53d.dirty', 'torch-version': '2.3.1+cu121', 'torch-cuda-available': True, 'torch-cuda-version': '12.1', 'python-version': '3.9', 'icefall-git-branch': 'feature/ksponspeech_zipformer', 'icefall-git-sha1': '7dda45c9-dirty', 'icefall-git-date': 'Tue Jun 18 16:40:30 2024', 'icefall-path': '/home/ubuntu/icefall', 'k2-path': '/home/ubuntu/miniforge3/envs/lhotse/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/home/ubuntu/lhotse/lhotse/__init__.py', 'hostname': 'gpu-1', 'IP address': '127.0.1.1'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('zipformer/exp'), 'bpe_model': 'data/lang_bpe_5000/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 2, 'backoff_id': 500, 'context_score': 2, 'context_file': '', 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': True, 'use_ctc': False, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 200.0, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'lm_vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('zipformer/exp/fast_beam_search'), 'has_contexts': False, 'suffix': 'epoch-30-avg-9-beam-20.0-max-contexts-8-max-states-64-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 5000}
+ 2024-06-24 10:30:31,318 INFO [decode.py:850] About to create model
+ 2024-06-24 10:30:32,012 INFO [decode.py:917] Calculating the averaged model over epoch range from 21 (excluded) to 30
+ 2024-06-24 10:30:37,819 INFO [decode.py:1011] Number of model parameters: 74778511
+ 2024-06-24 10:30:37,819 INFO [asr_datamodule.py:405] About to get eval_clean cuts
+ 2024-06-24 10:30:37,821 INFO [asr_datamodule.py:412] About to get eval_other cuts
+ 2024-06-24 10:30:41,828 INFO [decode.py:705] batch 0/?, cuts processed until now is 21
+ 2024-06-24 10:30:50,844 INFO [decode.py:705] batch 20/?, cuts processed until now is 1368
+ 2024-06-24 10:30:56,165 INFO [zipformer.py:1858] name=None, attn_weights_entropy = tensor([3.6460, 3.3435, 2.4906, 3.5147], device='cuda:0')
+ 2024-06-24 10:31:01,280 INFO [decode.py:705] batch 40/?, cuts processed until now is 2063
+ 2024-06-24 10:31:10,045 INFO [decode.py:721] The transcripts are stored in zipformer/exp/fast_beam_search/recogs-eval_clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
+ 2024-06-24 10:31:10,135 INFO [utils.py:656] [eval_clean-beam_20.0_max_contexts_8_max_states_64] %WER 10.59% [5134 / 48463, 951 ins, 1838 del, 2345 sub ]
+ 2024-06-24 10:31:10,326 INFO [decode.py:734] Wrote detailed error stats to zipformer/exp/fast_beam_search/errs-eval_clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
+ 2024-06-24 10:31:10,329 INFO [decode.py:750]
+ For eval_clean, CER of different settings are:
+ beam_20.0_max_contexts_8_max_states_64 10.59 best for eval_clean
+
+ 2024-06-24 10:31:12,334 INFO [decode.py:705] batch 0/?, cuts processed until now is 17
+ 2024-06-24 10:31:22,306 INFO [decode.py:705] batch 20/?, cuts processed until now is 989
+ 2024-06-24 10:31:28,298 INFO [zipformer.py:1858] name=None, attn_weights_entropy = tensor([4.7227, 4.1740, 3.1463, 4.4510], device='cuda:0')
+ 2024-06-24 10:31:33,710 INFO [decode.py:705] batch 40/?, cuts processed until now is 1761
+ 2024-06-24 10:31:44,195 INFO [decode.py:705] batch 60/?, cuts processed until now is 2371
+ 2024-06-24 10:31:54,865 INFO [decode.py:705] batch 80/?, cuts processed until now is 2803
+ 2024-06-24 10:31:57,079 INFO [decode.py:721] The transcripts are stored in zipformer/exp/fast_beam_search/recogs-eval_other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
+ 2024-06-24 10:31:57,340 INFO [utils.py:656] [eval_other-beam_20.0_max_contexts_8_max_states_64] %WER 11.54% [8202 / 71101, 1439 ins, 2941 del, 3822 sub ]
+ 2024-06-24 10:31:57,629 INFO [decode.py:734] Wrote detailed error stats to zipformer/exp/fast_beam_search/errs-eval_other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
+ 2024-06-24 10:31:57,632 INFO [decode.py:750]
+ For eval_other, CER of different settings are:
+ beam_20.0_max_contexts_8_max_states_64 11.54 best for eval_other
+
+ 2024-06-24 10:31:57,635 INFO [decode.py:1046] Done!
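The `%WER` figures that `utils.py` logs above are character error rates: the sum of insertion, deletion, and substitution counts over the number of reference characters. A quick sanity check of the eval_clean line, as a minimal sketch:

```python
# Sanity-check the logged CER for eval_clean (fast_beam_search):
# %WER 10.59% [5134 / 48463, 951 ins, 1838 del, 2345 sub]
ins, dels, subs = 951, 1838, 2345
ref_chars = 48463

errors = ins + dels + subs       # total edit operations = 5134
cer = 100 * errors / ref_chars   # error rate in percent

print(errors)         # 5134
print(round(cer, 2))  # 10.59
```

The bracketed numbers in every `utils.py:656` line decompose the same way.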
log/greedy_search/.ipynb_checkpoints/cer-summary-eval_clean-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model-checkpoint.txt DELETED
@@ -1,2 +0,0 @@
- settings CER
- greedy_search 10.6

log/greedy_search/.ipynb_checkpoints/cer-summary-eval_other-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model-checkpoint.txt DELETED
@@ -1,2 +0,0 @@
- settings CER
- greedy_search 11.56

log/greedy_search/{.ipynb_checkpoints/log-decode-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model-2024-06-24-10-29-54-checkpoint → log-decode-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model-2024-06-24-10-29-54} RENAMED
@@ -1,6 +1,6 @@
  2024-06-24 10:29:54,225 INFO [decode.py:832] Decoding started
  2024-06-24 10:29:54,225 INFO [decode.py:838] Device: cuda:0
- 2024-06-24 10:29:54,232 INFO [decode.py:848] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '8f976a1e1407e330e2a233d68f81b1eb5269fdaa', 'k2-git-date': 'Thu Jun 6 02:13:08 2024', 'lhotse-version': '1.24.0.dev+git.4d57d53d.dirty', 'torch-version': '2.3.1+cu121', 'torch-cuda-available': True, 'torch-cuda-version': '12.1', 'python-version': '3.9', 'icefall-git-branch': 'feature/ksponspeech_zipformer', 'icefall-git-sha1': '7dda45c9-dirty', 'icefall-git-date': 'Tue Jun 18 16:40:30 2024', 'icefall-path': '/home/ubuntu/test_icefall/icefall', 'k2-path': '/home/ubuntu/miniforge3/envs/lhotse/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/home/ubuntu/lsh/lhotse/lhotse/__init__.py', 'hostname': 'gpu-1', 'IP address': '127.0.1.1'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('download/test_KsponSpeech/240618_exp_zipformer_non_streaming_with_musan'), 'bpe_model': 'download/test_KsponSpeech/lang_bpe_5000/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 2, 'backoff_id': 500, 'context_score': 2, 'context_file': '', 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': True, 'use_ctc': False, 'manifest_dir': PosixPath('download/test_KsponSpeech/fbank'), 'max_duration': 200.0, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'lm_vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('download/test_KsponSpeech/240618_exp_zipformer_non_streaming_with_musan/greedy_search'), 'has_contexts': False, 'suffix': 'epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 5000}
+ 2024-06-24 10:29:54,232 INFO [decode.py:848] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '8f976a1e1407e330e2a233d68f81b1eb5269fdaa', 'k2-git-date': 'Thu Jun 6 02:13:08 2024', 'lhotse-version': '1.24.0.dev+git.4d57d53d.dirty', 'torch-version': '2.3.1+cu121', 'torch-cuda-available': True, 'torch-cuda-version': '12.1', 'python-version': '3.9', 'icefall-git-branch': 'feature/ksponspeech_zipformer', 'icefall-git-sha1': '7dda45c9-dirty', 'icefall-git-date': 'Tue Jun 18 16:40:30 2024', 'icefall-path': '/home/ubuntu/icefall', 'k2-path': '/home/ubuntu/miniforge3/envs/lhotse/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/home/ubuntu/lhotse/lhotse/__init__.py', 'hostname': 'gpu-1', 'IP address': '127.0.1.1'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('zipformer/exp'), 'bpe_model': 'data/lang_bpe_5000/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 2, 'backoff_id': 500, 'context_score': 2, 'context_file': '', 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': True, 'use_ctc': False, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 200.0, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'lm_vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('zipformer/exp/greedy_search'), 'has_contexts': False, 'suffix': 'epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 5000}
  2024-06-24 10:29:54,232 INFO [decode.py:850] About to create model
  2024-06-24 10:29:54,898 INFO [decode.py:917] Calculating the averaged model over epoch range from 21 (excluded) to 30
  2024-06-24 10:29:56,517 INFO [decode.py:1011] Number of model parameters: 74778511
@@ -11,18 +11,18 @@
  device='cuda:0')
  2024-06-24 10:30:05,546 INFO [zipformer.py:1858] name=None, attn_weights_entropy = tensor([3.8089, 3.2592, 2.0893, 3.4451], device='cuda:0')
  2024-06-24 10:30:08,163 INFO [decode.py:705] batch 50/?, cuts processed until now is 2332
- 2024-06-24 10:30:09,457 INFO [decode.py:721] The transcripts are stored in download/test_KsponSpeech/240618_exp_zipformer_non_streaming_with_musan/greedy_search/recogs-eval_clean-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2024-06-24 10:30:09,457 INFO [decode.py:721] The transcripts are stored in zipformer/exp/greedy_search/recogs-eval_clean-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model.txt
  2024-06-24 10:30:09,553 INFO [utils.py:656] [eval_clean-greedy_search] %WER 10.60% [5135 / 48463, 861 ins, 1840 del, 2434 sub ]
- 2024-06-24 10:30:09,759 INFO [decode.py:734] Wrote detailed error stats to download/test_KsponSpeech/240618_exp_zipformer_non_streaming_with_musan/greedy_search/errs-eval_clean-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2024-06-24 10:30:09,759 INFO [decode.py:734] Wrote detailed error stats to zipformer/exp/greedy_search/errs-eval_clean-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model.txt
  2024-06-24 10:30:09,762 INFO [decode.py:750]
  For eval_clean, CER of different settings are:
  greedy_search 10.6 best for eval_clean

  2024-06-24 10:30:11,510 INFO [decode.py:705] batch 0/?, cuts processed until now is 17
  2024-06-24 10:30:20,384 INFO [decode.py:705] batch 50/?, cuts processed until now is 2069
- 2024-06-24 10:30:26,564 INFO [decode.py:721] The transcripts are stored in download/test_KsponSpeech/240618_exp_zipformer_non_streaming_with_musan/greedy_search/recogs-eval_other-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2024-06-24 10:30:26,564 INFO [decode.py:721] The transcripts are stored in zipformer/exp/greedy_search/recogs-eval_other-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model.txt
  2024-06-24 10:30:26,707 INFO [utils.py:656] [eval_other-greedy_search] %WER 11.56% [8221 / 71101, 1297 ins, 3018 del, 3906 sub ]
- 2024-06-24 10:30:27,004 INFO [decode.py:734] Wrote detailed error stats to download/test_KsponSpeech/240618_exp_zipformer_non_streaming_with_musan/greedy_search/errs-eval_other-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model.txt
+ 2024-06-24 10:30:27,004 INFO [decode.py:734] Wrote detailed error stats to zipformer/exp/greedy_search/errs-eval_other-greedy_search-epoch-30-avg-9-context-2-max-sym-per-frame-1-use-averaged-model.txt
  2024-06-24 10:30:27,017 INFO [decode.py:750]
  For eval_other, CER of different settings are:
  greedy_search 11.56 best for eval_other
log/modified_beam_search/.ipynb_checkpoints/cer-summary-eval_clean-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model-checkpoint.txt DELETED
@@ -1,2 +0,0 @@
- settings CER
- beam_size_4 10.35

log/modified_beam_search/.ipynb_checkpoints/cer-summary-eval_other-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model-checkpoint.txt DELETED
@@ -1,2 +0,0 @@
- settings CER
- beam_size_4 11.35

log/modified_beam_search/log-decode-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model-2024-06-24-10-32-00 ADDED
@@ -0,0 +1,33 @@
+ 2024-06-24 10:32:00,991 INFO [decode.py:832] Decoding started
+ 2024-06-24 10:32:00,991 INFO [decode.py:838] Device: cuda:0
+ 2024-06-24 10:32:00,999 INFO [decode.py:848] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '8f976a1e1407e330e2a233d68f81b1eb5269fdaa', 'k2-git-date': 'Thu Jun 6 02:13:08 2024', 'lhotse-version': '1.24.0.dev+git.4d57d53d.dirty', 'torch-version': '2.3.1+cu121', 'torch-cuda-available': True, 'torch-cuda-version': '12.1', 'python-version': '3.9', 'icefall-git-branch': 'feature/ksponspeech_zipformer', 'icefall-git-sha1': '7dda45c9-dirty', 'icefall-git-date': 'Tue Jun 18 16:40:30 2024', 'icefall-path': '/home/ubuntu/icefall', 'k2-path': '/home/ubuntu/miniforge3/envs/lhotse/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/home/ubuntu/lhotse/lhotse/__init__.py', 'hostname': 'gpu-1', 'IP address': '127.0.1.1'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('zipformer/exp'), 'bpe_model': 'data/lang_bpe_5000/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'modified_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 2, 'backoff_id': 500, 'context_score': 2, 'context_file': '', 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': True, 'use_ctc': False, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 200.0, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'lm_vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('zipformer/exp/modified_beam_search'), 'has_contexts': False, 'suffix': 'epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 5000}
+ 2024-06-24 10:32:00,999 INFO [decode.py:850] About to create model
+ 2024-06-24 10:32:01,710 INFO [decode.py:917] Calculating the averaged model over epoch range from 21 (excluded) to 30
+ 2024-06-24 10:32:03,747 INFO [decode.py:1011] Number of model parameters: 74778511
+ 2024-06-24 10:32:03,747 INFO [asr_datamodule.py:405] About to get eval_clean cuts
+ 2024-06-24 10:32:03,749 INFO [asr_datamodule.py:412] About to get eval_other cuts
+ 2024-06-24 10:32:09,105 INFO [decode.py:705] batch 0/?, cuts processed until now is 21
+ 2024-06-24 10:32:41,504 INFO [decode.py:705] batch 20/?, cuts processed until now is 1368
+ 2024-06-24 10:33:15,949 INFO [decode.py:705] batch 40/?, cuts processed until now is 2063
+ 2024-06-24 10:33:19,367 INFO [zipformer.py:1858] name=None, attn_weights_entropy = tensor([3.4619, 3.0951, 3.0294, 2.8094, 2.7679, 2.9824, 3.0918, 2.7093],
+ device='cuda:0')
+ 2024-06-24 10:33:38,700 INFO [decode.py:721] The transcripts are stored in zipformer/exp/modified_beam_search/recogs-eval_clean-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2024-06-24 10:33:38,793 INFO [utils.py:656] [eval_clean-beam_size_4] %WER 10.35% [5015 / 48463, 1184 ins, 1396 del, 2435 sub ]
+ 2024-06-24 10:33:39,004 INFO [decode.py:734] Wrote detailed error stats to zipformer/exp/modified_beam_search/errs-eval_clean-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2024-06-24 10:33:39,007 INFO [decode.py:750]
+ For eval_clean, CER of different settings are:
+ beam_size_4 10.35 best for eval_clean
+
+ 2024-06-24 10:33:42,135 INFO [decode.py:705] batch 0/?, cuts processed until now is 17
+ 2024-06-24 10:34:14,911 INFO [decode.py:705] batch 20/?, cuts processed until now is 989
+ 2024-06-24 10:34:48,575 INFO [decode.py:705] batch 40/?, cuts processed until now is 1761
+ 2024-06-24 10:35:23,764 INFO [decode.py:705] batch 60/?, cuts processed until now is 2371
+ 2024-06-24 10:35:54,451 INFO [decode.py:705] batch 80/?, cuts processed until now is 2803
+ 2024-06-24 10:35:57,514 INFO [decode.py:721] The transcripts are stored in zipformer/exp/modified_beam_search/recogs-eval_other-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2024-06-24 10:35:57,641 INFO [utils.py:656] [eval_other-beam_size_4] %WER 11.35% [8072 / 71101, 1769 ins, 2405 del, 3898 sub ]
+ 2024-06-24 10:35:57,915 INFO [decode.py:734] Wrote detailed error stats to zipformer/exp/modified_beam_search/errs-eval_other-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2024-06-24 10:35:57,917 INFO [decode.py:750]
+ For eval_other, CER of different settings are:
+ beam_size_4 11.35 best for eval_other
+
+ 2024-06-24 10:35:57,922 INFO [decode.py:1046] Done!
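Taken together, the three decoding logs in this commit report the following CERs. A small recap, with the method names and numbers copied from the log lines above:

```python
# CER (%) per decoding method, as reported in the logs in this commit.
results = {
    "greedy_search":        {"eval_clean": 10.60, "eval_other": 11.56},
    "fast_beam_search":     {"eval_clean": 10.59, "eval_other": 11.54},
    "modified_beam_search": {"eval_clean": 10.35, "eval_other": 11.35},
}

# modified_beam_search gives the lowest CER on both eval sets.
best = min(results, key=lambda m: results[m]["eval_clean"])
print(best)  # modified_beam_search
```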
log/tensorboard/events.out.tfevents.1718771868.gpu-1.1326770.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a24d9d8acb6a54507973b5f3db0779000bffee69af86f21de5d88757c848a671
+ size 2937883