yfyeung committed
Commit e4b0b6b
1 Parent(s): 8a15837

Delete decoding-results/ctc-decoding/log-decode-epoch-30-avg-10-use-averaged-model-2022-12-14-15-17-22

decoding-results/ctc-decoding/log-decode-epoch-30-avg-10-use-averaged-model-2022-12-14-15-17-22 DELETED
@@ -1,27 +0,0 @@
- 2022-12-14 15:17:22,943 INFO [ctc_decode.py:608] Decoding started
- 2022-12-14 15:17:22,944 INFO [ctc_decode.py:614] Device: cuda:0
- 2022-12-14 15:17:22,944 INFO [ctc_decode.py:615] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'beam_size': 10, 'use_double_scores': True, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '6df2d56bd9097bba8d8af12d6c1ef8cb66bf9c17', 'k2-git-date': 'Thu Nov 17 19:06:54 2022', 'lhotse-version': '1.10.0', 'torch-version': '1.13.0', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.1', 'icefall-git-branch': 'blankskip', 'icefall-git-sha1': 'cf69804-dirty', 'icefall-git-date': 'Sat Dec 3 16:30:31 2022', 'icefall-path': '/home/yfy62/icefall', 'k2-path': '/home/yfy62/anaconda3/envs/icefall/lib/python3.10/site-packages/k2-1.22.dev20221122+cuda11.6.torch1.13.0-py3.10-linux-x86_64.egg/k2/__init__.py', 'lhotse-path': '/home/yfy62/anaconda3/envs/icefall/lib/python3.10/site-packages/lhotse/__init__.py', 'hostname': 'd3-hpc-sjtu-test-004', 'IP address': '10.11.11.11'}, 'frame_shift_ms': 10, 'search_beam': 20, 'output_beam': 8, 'min_active_states': 30, 'max_active_states': 10000, 'epoch': 30, 'iter': 0, 'avg': 10, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'context_size': 2, 'decoding_method': 'ctc-decoding', 'num_paths': 100, 'nbest_scale': 0.5, 'hlg_scale': 0.8, 'lm_dir': PosixPath('data/lm'), 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling/ctc-decoding'), 'suffix': 'epoch-30-avg-10-use-averaged-model'}
- 2022-12-14 15:17:23,300 INFO [lexicon.py:168] Loading pre-compiled data/lang_bpe_500/Linv.pt
- 2022-12-14 15:17:24,967 INFO [ctc_decode.py:693] About to create model
- 2022-12-14 15:17:25,311 INFO [zipformer.py:179] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
- 2022-12-14 15:17:25,325 INFO [ctc_decode.py:760] Calculating the averaged model over epoch range from 20 (excluded) to 30
- 2022-12-14 15:17:28,896 INFO [ctc_decode.py:777] Number of model parameters: 71164387
- 2022-12-14 15:17:28,896 INFO [asr_datamodule.py:443] About to get test-clean cuts
- 2022-12-14 15:17:28,897 INFO [asr_datamodule.py:450] About to get test-other cuts
- 2022-12-14 15:17:32,819 INFO [ctc_decode.py:526] batch 0/?, cuts processed until now is 43
- 2022-12-14 15:17:53,968 INFO [ctc_decode.py:544] The transcripts are stored in pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling/ctc-decoding/recogs-test-clean-ctc-decoding-epoch-30-avg-10-use-averaged-model.txt
- 2022-12-14 15:17:54,111 INFO [utils.py:536] [test-clean-ctc-decoding] %WER 6.24% [3280 / 52576, 327 ins, 240 del, 2713 sub ]
- 2022-12-14 15:17:54,339 INFO [ctc_decode.py:555] Wrote detailed error stats to pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling/ctc-decoding/errs-test-clean-ctc-decoding-epoch-30-avg-10-use-averaged-model.txt
- 2022-12-14 15:17:54,340 INFO [ctc_decode.py:572]
- For test-clean, WER of different settings are:
- ctc-decoding 6.24 best for test-clean
-
- 2022-12-14 15:17:55,518 INFO [ctc_decode.py:526] batch 0/?, cuts processed until now is 52
- 2022-12-14 15:18:17,880 INFO [ctc_decode.py:544] The transcripts are stored in pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling/ctc-decoding/recogs-test-other-ctc-decoding-epoch-30-avg-10-use-averaged-model.txt
- 2022-12-14 15:18:17,966 INFO [utils.py:536] [test-other-ctc-decoding] %WER 16.97% [8883 / 52343, 842 ins, 805 del, 7236 sub ]
- 2022-12-14 15:18:18,137 INFO [ctc_decode.py:555] Wrote detailed error stats to pruned_transducer_stateless7_ctc_bk/exp_lconv_scaling/ctc-decoding/errs-test-other-ctc-decoding-epoch-30-avg-10-use-averaged-model.txt
- 2022-12-14 15:18:18,138 INFO [ctc_decode.py:572]
- For test-other, WER of different settings are:
- ctc-decoding 16.97 best for test-other
-
- 2022-12-14 15:18:18,138 INFO [ctc_decode.py:810] Done!
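Note (not part of the deleted log): the %WER lines above report errors as insertions + deletions + substitutions over the number of reference words. A minimal sketch of that arithmetic, using the figures from this log (the wer helper below is illustrative, not an icefall function):

# Reproduce the %WER arithmetic reported in the deleted log.
# WER = (insertions + deletions + substitutions) / reference words, as a percentage.
def wer(ins: int, dels: int, subs: int, ref_words: int) -> float:
    return 100.0 * (ins + dels + subs) / ref_words

# Figures taken from the log lines above.
print(f"test-clean: {wer(327, 240, 2713, 52576):.2f}%")   # prints ~6.24%
print(f"test-other: {wer(842, 805, 7236, 52343):.2f}%")   # prints ~16.97%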