2022-12-14 16:40:31,275 INFO [ctc_guild_decode_bk.py:710] Decoding started
2022-12-14 16:40:31,276 INFO [ctc_guild_decode_bk.py:716] Device: cuda:0
2022-12-14 16:40:31,278 INFO [ctc_guild_decode_bk.py:731] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'beam_size': 4, 'use_double_scores': True, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '6df2d56bd9097bba8d8af12d6c1ef8cb66bf9c17', 'k2-git-date': 'Thu Nov 17 19:06:54 2022', 'lhotse-version': '1.10.0', 'torch-version': '1.13.0', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.1', 'icefall-git-branch': 'blankskip', 'icefall-git-sha1': 'cf69804-dirty', 'icefall-git-date': 'Sat Dec 3 16:30:31 2022', 'icefall-path': '/home/yfy62/icefall', 'k2-path': '/home/yfy62/anaconda3/envs/icefall/lib/python3.10/site-packages/k2-1.22.dev20221122+cuda11.6.torch1.13.0-py3.10-linux-x86_64.egg/k2/__init__.py', 'lhotse-path': '/home/yfy62/anaconda3/envs/icefall/lib/python3.10/site-packages/lhotse/__init__.py', 'hostname': 'd3-hpc-sjtu-test-004', 'IP address': '10.11.11.11'}, 'epoch': 30, 'iter': 0, 'avg': 13, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'greedy_search', 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'simulate_streaming': False, 'decode_chunk_size': 16, 'left_context': 64, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling/greedy_search'), 'suffix': 'epoch-30-avg-13-context-2-max-sym-per-frame-1-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2022-12-14 16:40:31,278 INFO [ctc_guild_decode_bk.py:733] About to create model
2022-12-14 16:40:31,774 INFO [zipformer.py:179] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2022-12-14 16:40:31,794 INFO [ctc_guild_decode_bk.py:800] Calculating the averaged model over epoch range from 17 (excluded) to 30
2022-12-14 16:40:36,459 INFO [ctc_guild_decode_bk.py:836] Number of model parameters: 71164387
2022-12-14 16:40:36,459 INFO [asr_datamodule.py:443] About to get test-clean cuts
2022-12-14 16:40:36,460 INFO [asr_datamodule.py:450] About to get test-other cuts
2022-12-14 16:40:41,112 INFO [ctc_guild_decode_bk.py:608] batch 0/?, cuts processed until now is 43
2022-12-14 16:40:42,842 INFO [zipformer.py:1414] attn_weights_entropy = tensor([5.0845, 5.2382, 5.3612, 5.0017, 5.2092, 4.7563, 4.6855, 4.7861], device='cuda:0'), covar=tensor([0.0388, 0.0189, 0.0125, 0.0220, 0.0208, 0.0217, 0.0272, 0.0309], device='cuda:0'), in_proj_covar=tensor([0.0182, 0.0146, 0.0126, 0.0149, 0.0135, 0.0156, 0.0173, 0.0173], device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0002, 0.0002], device='cuda:0')
2022-12-14 16:40:45,678 INFO [zipformer.py:1414] attn_weights_entropy = tensor([3.1129, 3.3022, 3.6867, 2.1906, 3.2836, 3.0820, 3.2884, 2.9239], device='cuda:0'), covar=tensor([0.0854, 0.0538, 0.0214, 0.2138, 0.0361, 0.0956, 0.0574, 0.1603], device='cuda:0'), in_proj_covar=tensor([0.0183, 0.0129, 0.0122, 0.0206, 0.0134, 0.0180, 0.0172, 0.0206], device='cuda:0'), out_proj_covar=tensor([1.2879e-04, 9.0075e-05, 8.3072e-05, 1.4434e-04, 9.0510e-05, 1.2637e-04, 1.1752e-04, 1.4144e-04], device='cuda:0')
2022-12-14 16:40:53,272 INFO [ctc_guild_decode_bk.py:626] The transcripts are stored in pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling/greedy_search/recogs-test-clean-greedy_search-epoch-30-avg-13-context-2-max-sym-per-frame-1-use-averaged-model.txt
2022-12-14 16:40:53,345 INFO [utils.py:536] [test-clean-greedy_search] %WER 5.74% [3016 / 52576, 326 ins, 267 del, 2423 sub ]
2022-12-14 16:40:53,491 INFO [ctc_guild_decode_bk.py:639] Wrote detailed error stats to pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling/greedy_search/errs-test-clean-greedy_search-epoch-30-avg-13-context-2-max-sym-per-frame-1-use-averaged-model.txt
2022-12-14 16:40:53,491 INFO [ctc_guild_decode_bk.py:656] For test-clean, WER of different settings are:
greedy_search	5.74	best for test-clean
2022-12-14 16:40:54,313 INFO [ctc_guild_decode_bk.py:608] batch 0/?, cuts processed until now is 52
2022-12-14 16:40:54,451 INFO [zipformer.py:1414] attn_weights_entropy = tensor([4.4679, 4.4290, 4.5958, 5.0097, 4.3190, 4.8124, 4.5159, 4.5238], device='cuda:0'), covar=tensor([0.0281, 0.0263, 0.0248, 0.0141, 0.0244, 0.0115, 0.0241, 0.0221], device='cuda:0'), in_proj_covar=tensor([0.0103, 0.0086, 0.0093, 0.0085, 0.0073, 0.0079, 0.0079, 0.0085], device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001], device='cuda:0')
2022-12-14 16:40:55,386 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.8303, 2.5061, 2.4607, 3.5364, 3.2174, 2.9463, 3.1359, 3.0660], device='cuda:0'), covar=tensor([0.0377, 0.0609, 0.0640, 0.0145, 0.0443, 0.0493, 0.0152, 0.0411], device='cuda:0'), in_proj_covar=tensor([0.0073, 0.0065, 0.0084, 0.0054, 0.0053, 0.0054, 0.0063, 0.0053], device='cuda:0'), out_proj_covar=tensor([5.8793e-05, 5.3987e-05, 8.1245e-05, 3.9913e-05, 4.5528e-05, 4.6460e-05, 4.7543e-05, 4.2467e-05], device='cuda:0')
2022-12-14 16:41:06,368 INFO [ctc_guild_decode_bk.py:626] The transcripts are stored in pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling/greedy_search/recogs-test-other-greedy_search-epoch-30-avg-13-context-2-max-sym-per-frame-1-use-averaged-model.txt
2022-12-14 16:41:06,453 INFO [utils.py:536] [test-other-greedy_search] %WER 15.46% [8093 / 52343, 758 ins, 887 del, 6448 sub ]
2022-12-14 16:41:06,620 INFO [ctc_guild_decode_bk.py:639] Wrote detailed error stats to pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling/greedy_search/errs-test-other-greedy_search-epoch-30-avg-13-context-2-max-sym-per-frame-1-use-averaged-model.txt
2022-12-14 16:41:06,620 INFO [ctc_guild_decode_bk.py:656] For test-other, WER of different settings are:
greedy_search	15.46	best for test-other
2022-12-14 16:41:06,620 INFO [ctc_guild_decode_bk.py:867] Done!
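For reference, the %WER figures logged by the `[utils.py:536]` lines above follow the usual definition: (insertions + deletions + substitutions) divided by the number of reference words. A minimal sketch that recomputes the two reported values from the logged counts (the function name `wer_percent` is illustrative, not part of icefall):

```python
def wer_percent(ins: int, dels: int, subs: int, ref_words: int) -> float:
    """Word error rate as a percentage: (ins + del + sub) / reference words."""
    return 100.0 * (ins + dels + subs) / ref_words

# Counts taken from the [utils.py:536] log lines:
# test-clean: [3016 / 52576, 326 ins, 267 del, 2423 sub]
# test-other: [8093 / 52343, 758 ins, 887 del, 6448 sub]
clean = wer_percent(326, 267, 2423, 52576)
other = wer_percent(758, 887, 6448, 52343)
print(f"test-clean: {clean:.2f}%  test-other: {other:.2f}%")
```

Note that the bracketed totals are consistent: 326 + 267 + 2423 = 3016 and 758 + 887 + 6448 = 8093, matching the numerators in the log.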