2022-07-28 22:53:58,562 INFO [decode.py:692] Decoding started
2022-07-28 22:53:58,563 INFO [decode.py:698] Device: cuda:0
2022-07-28 22:53:58,566 INFO [decode.py:713] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.1.0', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'streaming5', 'icefall-git-sha1': 'ee8397e-clean', 'icefall-git-date': 'Thu Jul 28 22:32:51 2022', 'icefall-path': '/ceph-kw/kangwei/code/icefall_streaming2', 'k2-path': '/ceph-hw/kangwei/code/k2_release/k2/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-hw/kangwei/dev_tools/anaconda3/envs/rnnt2/lib/python3.8/site-packages/lhotse-1.1.0-py3.8.egg/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-7-0616225511-78bf4545d8-tv52r', 'IP address': '10.177.77.9'}, 'epoch': 25, 'iter': 0, 'avg': 5, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless5/exp-L-nolinear'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'simulate_streaming': True, 'decode_chunk_size': 16, 'left_context': 32, 'num_encoder_layers': 18, 'dim_feedforward': 2048, 'nhead': 8, 'encoder_dim': 512, 'decoder_dim': 512, 'joiner_dim': 512, 'dynamic_chunk_training': False, 'causal_convolution': True, 'short_chunk_size': 25, 'num_left_chunks': 4, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 1000, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless5/exp-L-nolinear/fast_beam_search'), 'suffix': 'epoch-25-avg-5-streaming-chunk-size-16-left-context-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2022-07-28 22:53:58,566 INFO [decode.py:715] About to create model
2022-07-28 22:53:59,102 INFO [decode.py:782] Calculating the averaged model over epoch range from 20 (excluded) to 25
2022-07-28 22:54:08,088 INFO [decode.py:818] Number of model parameters: 116553580
2022-07-28 22:54:08,089 INFO [asr_datamodule.py:444] About to get test-clean cuts
2022-07-28 22:54:08,092 INFO [asr_datamodule.py:451] About to get test-other cuts
2022-07-28 22:54:11,600 INFO [decode.py:591] batch 0/?, cuts processed until now is 79
2022-07-28 22:54:54,307 INFO [decode.py:608] The transcripts are stored in pruned_transducer_stateless5/exp-L-nolinear/fast_beam_search/recogs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-25-avg-5-streaming-chunk-size-16-left-context-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2022-07-28 22:54:54,388 INFO [utils.py:416] [test-clean-beam_20.0_max_contexts_8_max_states_64] %WER 3.27% [1717 / 52576, 197 ins, 118 del, 1402 sub ]
2022-07-28 22:54:54,611 INFO [decode.py:621] Wrote detailed error stats to pruned_transducer_stateless5/exp-L-nolinear/fast_beam_search/errs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-25-avg-5-streaming-chunk-size-16-left-context-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2022-07-28 22:54:54,612 INFO [decode.py:638] For test-clean, WER of different settings are:
beam_20.0_max_contexts_8_max_states_64	3.27	best for test-clean
2022-07-28 22:54:57,618 INFO [decode.py:591] batch 0/?, cuts processed until now is 96
2022-07-28 22:55:41,954 INFO [decode.py:608] The transcripts are stored in pruned_transducer_stateless5/exp-L-nolinear/fast_beam_search/recogs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-25-avg-5-streaming-chunk-size-16-left-context-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2022-07-28 22:55:42,103 INFO [utils.py:416] [test-other-beam_20.0_max_contexts_8_max_states_64] %WER 8.33% [4360 / 52343, 446 ins, 401 del, 3513 sub ]
2022-07-28 22:55:42,337 INFO [decode.py:621] Wrote detailed error stats to pruned_transducer_stateless5/exp-L-nolinear/fast_beam_search/errs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-25-avg-5-streaming-chunk-size-16-left-context-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2022-07-28 22:55:42,337 INFO [decode.py:638] For test-other, WER of different settings are:
beam_20.0_max_contexts_8_max_states_64	8.33	best for test-other
2022-07-28 22:55:42,338 INFO [decode.py:847] Done!
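As a sanity check on the `[utils.py:416]` lines above: the reported %WER is the total edit count (insertions + deletions + substitutions) divided by the number of reference words, both given inside the brackets. The helper below is a minimal sketch (not icefall code) that recomputes the two figures from the counts in the log:

```python
# Sketch only: recompute the %WER figures from the bracketed counts in the
# log, e.g. "%WER 3.27% [1717 / 52576, 197 ins, 118 del, 1402 sub ]".

def wer_percent(ins: int, dels: int, subs: int, ref_words: int) -> float:
    """WER = (insertions + deletions + substitutions) / reference words, in %."""
    return round(100.0 * (ins + dels + subs) / ref_words, 2)

# test-clean counts from the log: 197 ins, 118 del, 1402 sub over 52576 words
print(wer_percent(197, 118, 1402, 52576))  # -> 3.27
# test-other counts from the log: 446 ins, 401 del, 3513 sub over 52343 words
print(wer_percent(446, 401, 3513, 52343))  # -> 8.33
```

Note that the bracketed numerators check out as well: 197 + 118 + 1402 = 1717 and 446 + 401 + 3513 = 4360.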