pruned-transducer-stateless7-streaming-id / exp /fast_beam_search /log-decode-epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-2023-06-21-09-40-15
2023-06-21 09:40:15,150 INFO [decode.py:654] Decoding started
2023-06-21 09:40:15,151 INFO [decode.py:660] Device: cuda:0
2023-06-21 09:40:15,152 INFO [lexicon.py:168] Loading pre-compiled data/lang_phone/Linv.pt
2023-06-21 09:40:15,155 INFO [decode.py:668] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '9426c9f730820d291f5dcb06be337662595fa7b4', 'k2-git-date': 'Sun Feb 5 17:35:01 2023', 'lhotse-version': '1.15.0.dev+git.00d3e36.clean', 'torch-version': '1.13.1+cu117', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.1', 'icefall-git-branch': 'master', 'icefall-git-sha1': 'd3f5d01-dirty', 'icefall-git-date': 'Wed May 31 04:15:45 2023', 'icefall-path': '/root/icefall', 'k2-path': '/usr/local/lib/python3.10/dist-packages/k2/__init__.py', 'lhotse-path': '/root/lhotse/lhotse/__init__.py', 'hostname': 'bookbot-k2', 'IP address': '127.0.0.1'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp'), 'lang_dir': 'data/lang_phone', 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 
'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/fast_beam_search'), 'suffix': 'epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model', 'blank_id': 0, 'unk_id': 7, 'vocab_size': 33}
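The config above sets `'decode_chunk_len': 32` with `'subsampling_factor': 4`. Assuming the usual 10 ms frame shift for the 80-dim fbank features (an assumption; the log does not state the frame shift), the streaming chunk duration and the number of encoder output frames per chunk work out as:

```python
# Hypothetical sketch of the chunk-size arithmetic implied by the config above.
# Assumption: 10 ms fbank frame shift (not stated anywhere in this log).
FRAME_SHIFT_MS = 10
decode_chunk_len = 32     # input frames per streaming chunk (from the config)
subsampling_factor = 4    # from the config

# Audio covered by one streaming chunk, in milliseconds.
chunk_duration_ms = decode_chunk_len * FRAME_SHIFT_MS

# Encoder output frames produced per chunk after subsampling.
encoder_frames_per_chunk = decode_chunk_len // subsampling_factor

print(chunk_duration_ms)         # 320 ms of audio per chunk
print(encoder_frames_per_chunk)  # 8 encoder frames per chunk
```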
2023-06-21 09:40:15,155 INFO [decode.py:670] About to create model
2023-06-21 09:40:15,733 INFO [zipformer.py:405] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-06-21 09:40:15,737 INFO [decode.py:741] Calculating the averaged model over epoch range from 21 (excluded) to 30
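The "epoch range from 21 (excluded) to 30" line corresponds to averaging the 9 checkpoints for epochs 22 through 30, matching `'avg': 9` in the config. A minimal sketch of plain checkpoint averaging, using toy dicts of floats rather than real PyTorch state dicts (a simplification: with `--use-averaged-model`, icefall actually relies on running model averages kept during training rather than a simple post-hoc mean):

```python
def average_checkpoints(state_dicts):
    """Element-wise mean over a list of toy state dicts.

    Simplified stand-in for checkpoint averaging; real state dicts hold
    torch tensors, and icefall's averaged-model path differs in detail.
    """
    n = len(state_dicts)
    return {k: sum(sd[k] for sd in state_dicts) / n for k in state_dicts[0]}

# Epochs 22..30 inclusive -> 9 checkpoints, as in the log line above.
epochs = list(range(22, 31))

# Toy checkpoints: one "parameter" whose value equals the epoch number.
ckpts = [{"weight": float(e)} for e in epochs]
avg = average_checkpoints(ckpts)
print(len(epochs), avg["weight"])  # 9 checkpoints, mean 26.0
```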
2023-06-21 09:40:19,291 INFO [decode.py:774] Number of model parameters: 69471350
2023-06-21 09:40:19,291 INFO [multidataset.py:122] About to get LibriVox test cuts
2023-06-21 09:40:19,291 INFO [multidataset.py:124] Loading LibriVox in lazy mode
2023-06-21 09:40:19,292 INFO [multidataset.py:133] About to get FLEURS test cuts
2023-06-21 09:40:19,292 INFO [multidataset.py:135] Loading FLEURS in lazy mode
2023-06-21 09:40:19,292 INFO [multidataset.py:144] About to get Common Voice test cuts
2023-06-21 09:40:19,292 INFO [multidataset.py:146] Loading Common Voice in lazy mode
2023-06-21 09:40:22,208 INFO [decode.py:565] batch 0/?, cuts processed until now is 44
2023-06-21 09:40:28,732 INFO [decode.py:579] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/fast_beam_search/recogs-test-librivox-epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-06-21 09:40:28,779 INFO [utils.py:561] [test-librivox-beam_20.0_max_contexts_8_max_states_64] %WER 4.85% [1773 / 36594, 295 ins, 904 del, 574 sub ]
2023-06-21 09:40:28,860 INFO [decode.py:590] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/fast_beam_search/errs-test-librivox-epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-06-21 09:40:28,860 INFO [decode.py:604]
For test-librivox, WER of different settings are:
beam_20.0_max_contexts_8_max_states_64 4.85 best for test-librivox
2023-06-21 09:40:30,839 INFO [decode.py:565] batch 0/?, cuts processed until now is 38
2023-06-21 09:41:00,055 INFO [decode.py:579] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/fast_beam_search/recogs-test-fleurs-epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-06-21 09:41:00,146 INFO [utils.py:561] [test-fleurs-beam_20.0_max_contexts_8_max_states_64] %WER 12.55% [11748 / 93580, 1672 ins, 5414 del, 4662 sub ]
2023-06-21 09:41:00,362 INFO [decode.py:590] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/fast_beam_search/errs-test-fleurs-epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-06-21 09:41:00,362 INFO [decode.py:604]
For test-fleurs, WER of different settings are:
beam_20.0_max_contexts_8_max_states_64 12.55 best for test-fleurs
2023-06-21 09:41:01,414 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.1632, 1.0353, 1.2741, 0.9735, 1.1847, 1.2830, 1.1450, 1.0967],
device='cuda:0'), covar=tensor([0.0547, 0.0601, 0.0483, 0.0755, 0.0373, 0.0368, 0.0490, 0.0569],
device='cuda:0'), in_proj_covar=tensor([0.0018, 0.0019, 0.0019, 0.0021, 0.0018, 0.0017, 0.0019, 0.0019],
device='cuda:0'), out_proj_covar=tensor([1.3702e-05, 1.4294e-05, 1.3432e-05, 1.4389e-05, 1.2265e-05, 1.4168e-05,
1.2323e-05, 1.3747e-05], device='cuda:0')
2023-06-21 09:41:02,049 INFO [decode.py:565] batch 0/?, cuts processed until now is 121
2023-06-21 09:41:22,562 INFO [decode.py:565] batch 20/?, cuts processed until now is 2809
2023-06-21 09:41:31,340 INFO [decode.py:579] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/fast_beam_search/recogs-test-commonvoice-epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-06-21 09:41:31,464 INFO [utils.py:561] [test-commonvoice-beam_20.0_max_contexts_8_max_states_64] %WER 14.89% [19770 / 132787, 2851 ins, 9210 del, 7709 sub ]
2023-06-21 09:41:31,757 INFO [decode.py:590] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/fast_beam_search/errs-test-commonvoice-epoch-30-avg-9-streaming-chunk-size-32-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-06-21 09:41:31,757 INFO [decode.py:604]
For test-commonvoice, WER of different settings are:
beam_20.0_max_contexts_8_max_states_64 14.89 best for test-commonvoice
2023-06-21 09:41:31,758 INFO [decode.py:809] Done!
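The three %WER lines above are internally consistent: in each bracket, the error count equals insertions + deletions + substitutions, and WER is errors divided by reference words. A quick check of that arithmetic, with the numbers copied directly from the log:

```python
# (errors, ref_words, ins, dels, subs) copied from the %WER lines in the log
results = {
    "test-librivox":    (1773,  36594,   295,  904,  574),
    "test-fleurs":      (11748, 93580,  1672, 5414, 4662),
    "test-commonvoice": (19770, 132787, 2851, 9210, 7709),
}

for name, (errs, ref, ins, dels, subs) in results.items():
    assert errs == ins + dels + subs      # error categories add up
    wer = 100.0 * errs / ref              # word error rate in percent
    print(f"{name}: {wer:.2f}%")          # 4.85%, 12.55%, 14.89%
```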