2022-05-30 09:20:04,018 INFO [decode.py:506] Decoding started
2022-05-30 09:20:04,018 INFO [decode.py:512] Device: cuda:0
2022-05-30 09:20:04,056 INFO [decode.py:522] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'vgg_frontend': False, 'embedding_dim': 512, 'warm_step': 80000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+missing.version.file', 'torch-version': '1.11.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'streaming-emformer-2022-05-27', 'icefall-git-sha1': 'c8c8645-dirty', 'icefall-git-date': 'Tue May 24 23:07:40 2022', 'icefall-path': '/ceph-fj/fangjun/open-source-2/icefall-streaming-2', 'k2-path': '/ceph-fj/fangjun/open-source-2/k2-multi-22/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-fj/fangjun/open-source-2/lhotse-master/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-7-0309102938-68688b4cbd-xhtcg', 'IP address': '10.48.32.137'}, 'epoch': 39, 'iter': 0, 'avg': 6, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_stateless_emformer_rnnt2/exp-full'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'attention_dim': 512, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 18, 'left_context_length': 128, 'segment_length': 8, 'right_context_length': 4, 'memory_size': 0, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'res_dir': PosixPath('pruned_stateless_emformer_rnnt2/exp-full/fast_beam_search'), 'suffix': 'epoch-39-avg-6-beam-4-max-contexts-4-max-states-8-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2022-05-30 09:20:04,057 INFO [decode.py:524] About to create model
2022-05-30 09:20:04,779 INFO [decode.py:591] Calculating the averaged model over epoch range from 33 (excluded) to 39
2022-05-30 09:20:10,733 INFO [decode.py:613] Number of model parameters: 65390556
2022-05-30 09:20:10,733 INFO [asr_datamodule.py:423] About to get test-clean cuts
2022-05-30 09:20:10,841 INFO [asr_datamodule.py:428] About to get test-other cuts
2022-05-30 09:20:13,223 INFO [decode.py:417] batch 0/?, cuts processed until now is 47
2022-05-30 09:20:27,894 INFO [decode.py:417] batch 10/?, cuts processed until now is 731
2022-05-30 09:20:42,673 INFO [decode.py:417] batch 20/?, cuts processed until now is 1658
2022-05-30 09:20:58,329 INFO [decode.py:417] batch 30/?, cuts processed until now is 2419
2022-05-30 09:21:05,418 INFO [decode.py:434] The transcripts are stored in pruned_stateless_emformer_rnnt2/exp-full/fast_beam_search/recogs-test-clean-beam_4_max_contexts_4_max_states_8-epoch-39-avg-6-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-05-30 09:21:05,490 INFO [utils.py:405] [test-clean-beam_4_max_contexts_4_max_states_8] %WER 4.29% [2256 / 52576, 267 ins, 181 del, 1808 sub ]
2022-05-30 09:21:05,669 INFO [decode.py:447] Wrote detailed error stats to pruned_stateless_emformer_rnnt2/exp-full/fast_beam_search/errs-test-clean-beam_4_max_contexts_4_max_states_8-epoch-39-avg-6-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-05-30 09:21:05,677 INFO [decode.py:464] For test-clean, WER of different settings are:
beam_4_max_contexts_4_max_states_8	4.29	best for test-clean
2022-05-30 09:21:07,589 INFO [decode.py:417] batch 0/?, cuts processed until now is 57
2022-05-30 09:21:21,554 INFO [decode.py:417] batch 10/?, cuts processed until now is 843
2022-05-30 09:21:36,166 INFO [decode.py:417] batch 20/?, cuts processed until now is 1889
2022-05-30 09:21:52,078 INFO [decode.py:417] batch 30/?, cuts processed until now is 2745
2022-05-30 09:21:58,077 INFO [decode.py:434] The transcripts are stored in pruned_stateless_emformer_rnnt2/exp-full/fast_beam_search/recogs-test-other-beam_4_max_contexts_4_max_states_8-epoch-39-avg-6-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-05-30 09:21:58,154 INFO [utils.py:405] [test-other-beam_4_max_contexts_4_max_states_8] %WER 11.26% [5894 / 52343, 607 ins, 637 del, 4650 sub ]
2022-05-30 09:21:58,337 INFO [decode.py:447] Wrote detailed error stats to pruned_stateless_emformer_rnnt2/exp-full/fast_beam_search/errs-test-other-beam_4_max_contexts_4_max_states_8-epoch-39-avg-6-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-05-30 09:21:58,345 INFO [decode.py:464] For test-other, WER of different settings are:
beam_4_max_contexts_4_max_states_8	11.26	best for test-other
2022-05-30 09:21:58,345 INFO [decode.py:641] Done!
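For reference, the `%WER x% [errors / ref_words, I ins, D del, S sub]` lines above follow the standard word-error-rate formula: the error count is the sum of insertions, deletions, and substitutions, divided by the number of reference words. A minimal sketch reproducing the two reported figures (the helper name `wer_percent` is illustrative, not part of icefall):

```python
def wer_percent(ins: int, dels: int, subs: int, ref_words: int) -> float:
    """Word error rate as a percentage:
    (insertions + deletions + substitutions) / reference word count * 100.
    """
    return 100.0 * (ins + dels + subs) / ref_words

# Figures taken from the log above.
print(round(wer_percent(267, 181, 1808, 52576), 2))  # test-clean -> 4.29
print(round(wer_percent(607, 637, 4650, 52343), 2))  # test-other -> 11.26
```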