2023-03-10 10:58:12,879 INFO [decode_with_timestamp.py:878] Decoding started
2023-03-10 10:58:12,880 INFO [decode_with_timestamp.py:884] Device: cuda:0
2023-03-10 10:58:12,882 INFO [decode_with_timestamp.py:899] {'frame_shift_ms': 10.0, 'allowed_excess_duration_ratio': 0.1, 'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '96c9a2aece2a3a7633da07740e24fa3d96f5498c', 'k2-git-date': 'Thu Nov 10 08:14:02 2022', 'lhotse-version': '1.13.0.dev+git.527d964.clean', 'torch-version': '1.12.1', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.8', 'icefall-git-branch': 'random_padding', 'icefall-git-sha1': '202ce08-clean', 'icefall-git-date': 'Thu Mar 9 15:05:03 2023', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_random_padding', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_latest/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/lhotse_development/lhotse_random_padding_left/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-1216192652-5bcf7587b4-n6q9m', 'IP address': '10.177.74.211'}, 'epoch': 30, 'iter': 0, 'avg': 11, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'modified_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'simulate_streaming': False, 'decode_chunk_size': 16, 'left_context': 64, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 3, 'backoff_id': 500, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank_ali'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'random_left_padding': False, 'num_left_padding': 8, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/modified_beam_search'), 'suffix': 'epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model', 'blank_id': 0, 'unk_id': 2}
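The parameters above request the averaged model ('epoch': 30, 'avg': 11, 'use_averaged_model': True), which is why the log below reports "Calculating the averaged model over epoch range from 19 (excluded) to 30". As a rough illustration only, the sketch below shows plain element-wise checkpoint averaging over epochs 20-30; the actual --use-averaged-model path in icefall combines the running parameter averages stored in the endpoint checkpoints rather than looping like this, and the epoch-N.pt file names are assumed from icefall's usual checkpoint naming.

```python
# Rough sketch: element-wise averaging of the model weights saved at
# epochs 20..30 (epoch=30, avg=11, i.e. epoch 19 excluded). Illustration only;
# decode_with_timestamp.py uses icefall's own averaging helpers, and the
# epoch-N.pt names are assumptions based on icefall's usual naming.
import torch

def average_checkpoints(filenames):
    """Element-wise mean of the 'model' state_dicts stored in the checkpoints."""
    avg = None
    for f in filenames:
        state = torch.load(f, map_location="cpu")["model"]
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(filenames) for k, v in avg.items()}

exp_dir = "pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4"
filenames = [f"{exp_dir}/epoch-{e}.pt" for e in range(20, 31)]  # epochs 20..30
# model.load_state_dict(average_checkpoints(filenames))
```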
2023-03-10 10:58:12,882 INFO [decode_with_timestamp.py:901] About to create model
2023-03-10 10:58:13,465 INFO [zipformer.py:178] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-03-10 10:58:13,478 INFO [decode_with_timestamp.py:968] Calculating the averaged model over epoch range from 19 (excluded) to 30
2023-03-10 10:58:22,926 INFO [decode_with_timestamp.py:1030] Number of model parameters: 70369391
2023-03-10 10:58:22,927 INFO [asr_datamodule.py:463] About to get test-clean cuts
2023-03-10 10:58:22,932 INFO [asr_datamodule.py:470] About to get test-other cuts
2023-03-10 10:58:30,111 INFO [decode_with_timestamp.py:740] batch 0/?, cuts processed until now is 36
2023-03-10 10:59:47,880 INFO [decode_with_timestamp.py:740] batch 20/?, cuts processed until now is 1037
2023-03-10 11:00:57,301 INFO [decode_with_timestamp.py:740] batch 40/?, cuts processed until now is 2298
2023-03-10 11:01:22,246 INFO [decode_with_timestamp.py:1062] Averaged first symbol emission time: 0.1069923664122152
2023-03-10 11:01:22,324 INFO [decode_with_timestamp.py:760] The transcripts are stored in pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/modified_beam_search/recogs-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-03-10 11:01:22,457 INFO [utils.py:795] [test-clean-beam_size_4] %WER 2.22% [1168 / 52576, 126 ins, 97 del, 945 sub ]
2023-03-10 11:01:22,457 INFO [utils.py:800] [test-clean-beam_size_4] %symbol-delay mean (s): -0.043, variance: 0.007 computed on 51534 correct words
2023-03-10 11:01:22,673 INFO [decode_with_timestamp.py:774] Wrote detailed error stats to pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/modified_beam_search/errs-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-03-10 11:01:22,674 INFO [decode_with_timestamp.py:803] For test-clean, WER of different settings are:
beam_size_4	2.22	best for test-clean
2023-03-10 11:01:22,674 INFO [decode_with_timestamp.py:810] For test-clean, symbol-delay of different settings are:
beam_size_4	mean: -0.043s, variance: 0.007	best for test-clean
2023-03-10 11:01:27,036 INFO [decode_with_timestamp.py:740] batch 0/?, cuts processed until now is 43
2023-03-10 11:02:40,085 INFO [decode_with_timestamp.py:740] batch 20/?, cuts processed until now is 1195
2023-03-10 11:03:50,482 INFO [decode_with_timestamp.py:740] batch 40/?, cuts processed until now is 2640
2023-03-10 11:04:09,742 INFO [decode_with_timestamp.py:1062] Averaged first symbol emission time: 0.12684586594079794
2023-03-10 11:04:09,827 INFO [decode_with_timestamp.py:760] The transcripts are stored in pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/modified_beam_search/recogs-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-03-10 11:04:09,963 INFO [utils.py:795] [test-other-beam_size_4] %WER 5.14% [2691 / 52343, 268 ins, 209 del, 2214 sub ]
2023-03-10 11:04:09,963 INFO [utils.py:800] [test-other-beam_size_4] %symbol-delay mean (s): -0.05, variance: 0.009 computed on 49902 correct words
2023-03-10 11:04:10,173 INFO [decode_with_timestamp.py:774] Wrote detailed error stats to pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/modified_beam_search/errs-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
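The %WER lines report errors as [errors / reference words, insertions, deletions, substitutions]: for test-clean, (126 + 97 + 945) / 52576 = 1168 / 52576 ≈ 2.22%, and for test-other 2691 / 52343 ≈ 5.14%. A minimal sketch of that arithmetic (the function name is illustrative, not the utils.py API):

```python
# WER as reported above: (ins + del + sub) / number of reference words.
# The function name is illustrative; the counts themselves come from
# icefall's alignment code in utils.py.
def wer_percent(ref_words: int, ins: int, dels: int, subs: int) -> float:
    return 100.0 * (ins + dels + subs) / ref_words

print(f"test-clean: {wer_percent(52576, 126, 97, 945):.2f}%")    # 2.22%
print(f"test-other: {wer_percent(52343, 268, 209, 2214):.2f}%")  # 5.14%
```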
2023-03-10 11:04:10,174 INFO [decode_with_timestamp.py:803] For test-other, WER of different settings are:
beam_size_4	5.14	best for test-other
2023-03-10 11:04:10,174 INFO [decode_with_timestamp.py:810] For test-other, symbol-delay of different settings are:
beam_size_4	mean: -0.05s, variance: 0.009	best for test-other
2023-03-10 11:04:10,174 INFO [decode_with_timestamp.py:1071] Done!
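The symbol-delay figures above are the mean and variance, over correctly recognized words only (51534 for test-clean, 49902 for test-other), of the gap between the emitted timestamp and the reference timestamp; a negative mean means symbols tend to be emitted slightly before the reference time. A minimal sketch of the statistic with made-up inputs (the real pairing of hypothesis and reference times comes from the alignment in utils.py):

```python
# Sketch of the symbol-delay statistic reported as e.g.
#   %symbol-delay mean (s): -0.043, variance: 0.007 computed on 51534 correct words
# The times below are made up; in the real script they come from aligning
# decoded timestamps against reference word times for correct words only.
from statistics import mean, pvariance

def symbol_delay_stats(hyp_times, ref_times):
    """Mean and population variance of (hyp - ref), in seconds, over correct words."""
    delays = [h - r for h, r in zip(hyp_times, ref_times)]
    return mean(delays), pvariance(delays)

m, v = symbol_delay_stats([1.20, 2.05, 3.48], [1.26, 2.10, 3.50])
print(f"mean: {m:.3f}s, variance: {v:.4f}")  # mean: -0.043s, variance: 0.0003
```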