2022-05-16 14:13:40,021 INFO [decode.py:496] Decoding started
2022-05-16 14:13:40,021 INFO [decode.py:502] Device: cuda:0
2022-05-16 14:13:40,024 INFO [decode.py:512] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.1.0.dev+missing.version.file', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'deeper-conformer-without-random-combiner', 'icefall-git-sha1': '2ce48a2-dirty', 'icefall-git-date': 'Fri May 13 23:22:30 2022', 'icefall-path': '/ceph-fj/fangjun/open-source-2/icefall-deeper-conformer-2', 'k2-path': '/ceph-fj/fangjun/open-source-2/k2-multi-22/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-fj/fangjun/open-source-2/lhotse-multi-3/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0307200233-b554c565c-lf9qd', 'IP address': '10.177.74.201'}, 'epoch': 39, 'iter': 0, 'avg': 13, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless5/exp-L'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_encoder_layers': 18, 'dim_feedforward': 2048, 'nhead': 8, 'encoder_dim': 512, 'decoder_dim': 512, 'joiner_dim': 512, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'res_dir': PosixPath('pruned_transducer_stateless5/exp-L/fast_beam_search'), 'suffix': 'epoch-39-avg-13-beam-4-max-contexts-4-max-states-8-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2022-05-16 14:13:40,024 INFO [decode.py:514] About to create model
2022-05-16 14:13:40,571 INFO [decode.py:581] Calculating the averaged model over epoch range from 26 (excluded) to 39
2022-05-16 14:13:51,002 INFO [decode.py:603] Number of model parameters: 116553580
2022-05-16 14:13:51,002 INFO [asr_datamodule.py:422] About to get test-clean cuts
2022-05-16 14:13:51,175 INFO [asr_datamodule.py:427] About to get test-other cuts
2022-05-16 14:13:53,772 INFO [decode.py:407] batch 0/?, cuts processed until now is 123
2022-05-16 14:14:25,589 INFO [decode.py:407] batch 20/?, cuts processed until now is 1558
2022-05-16 14:15:02,811 INFO [decode.py:424] The transcripts are stored in pruned_transducer_stateless5/exp-L/fast_beam_search/recogs-test-clean-beam_4_max_contexts_4_max_states_8-epoch-39-avg-13-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-05-16 14:15:02,900 INFO [utils.py:405] [test-clean-beam_4_max_contexts_4_max_states_8] %WER 2.39% [1259 / 52576, 136 ins, 95 del, 1028 sub ]
2022-05-16 14:15:03,076 INFO [decode.py:437] Wrote detailed error stats to pruned_transducer_stateless5/exp-L/fast_beam_search/errs-test-clean-beam_4_max_contexts_4_max_states_8-epoch-39-avg-13-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-05-16 14:15:03,077 INFO [decode.py:454] For test-clean, WER of different settings are:
beam_4_max_contexts_4_max_states_8	2.39	best for test-clean
2022-05-16 14:15:05,024 INFO [decode.py:407] batch 0/?, cuts processed until now is 138
2022-05-16 14:15:47,918 INFO [decode.py:407] batch 20/?, cuts processed until now is 1765
2022-05-16 14:16:20,948 INFO [decode.py:424] The transcripts are stored in pruned_transducer_stateless5/exp-L/fast_beam_search/recogs-test-other-beam_4_max_contexts_4_max_states_8-epoch-39-avg-13-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-05-16 14:16:21,021 INFO [utils.py:405] [test-other-beam_4_max_contexts_4_max_states_8] %WER 5.73% [3001 / 52343, 308 ins, 270 del, 2423 sub ]
2022-05-16 14:16:21,316 INFO [decode.py:437] Wrote detailed error stats to pruned_transducer_stateless5/exp-L/fast_beam_search/errs-test-other-beam_4_max_contexts_4_max_states_8-epoch-39-avg-13-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-05-16 14:16:21,317 INFO [decode.py:454] For test-other, WER of different settings are:
beam_4_max_contexts_4_max_states_8	5.73	best for test-other
2022-05-16 14:16:21,317 INFO [decode.py:631] Done!