2023-03-21 18:33:45,731 INFO [decode.py:482] Decoding started
2023-03-21 18:33:45,731 INFO [decode.py:488] Device: cuda:0
2023-03-21 18:33:45,985 INFO [lexicon.py:168] Loading pre-compiled data/lang_char/Linv.pt
2023-03-21 18:33:46,015 INFO [decode.py:494] {'frame_shift_ms': 10.0, 'allowed_excess_duration_ratio': 0.1, 'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '96c9a2aece2a3a7633da07740e24fa3d96f5498c', 'k2-git-date': 'Thu Nov 10 08:14:02 2022', 'lhotse-version': '1.13.0.dev+git.527d964.clean', 'torch-version': '1.12.1', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.8', 'icefall-git-branch': 'aishell_zipformer', 'icefall-git-sha1': 'd735cdf-dirty', 'icefall-git-date': 'Tue Mar 21 18:08:42 2023', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_aishell_zipformer', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_latest/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/lhotse_development/lhotse_random_padding_left/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-7-1218101249-5d97868c7c-v8ngc', 'IP address': '10.177.77.18'}, 'epoch': 42, 'iter': 0, 'avg': 6, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp'), 'lang_dir': PosixPath('data/lang_char'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4, 'max_contexts': 4, 'max_states': 8, 'context_size': 1, 'max_sym_per_frame': 1, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('pruned_transducer_stateless7/exp/fast_beam_search'), 'suffix': 'epoch-42-avg-6-beam-4-max-contexts-4-max-states-8-use-averaged-model', 'blank_id': 0, 'vocab_size': 4336}
2023-03-21 18:33:46,015 INFO [decode.py:496] About to create model
2023-03-21 18:33:46,792 INFO [zipformer.py:178] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-03-21 18:33:46,896 INFO [decode.py:568] Calculating the averaged model over epoch range from 36 (excluded) to 42
2023-03-21 18:33:53,521 INFO [decode.py:591] Number of model parameters: 77741923
2023-03-21 18:33:53,522 INFO [aishell.py:51] About to get test cuts from data/fbank/aishell_cuts_test.jsonl.gz
2023-03-21 18:33:53,525 INFO [aishell.py:45] About to get valid cuts from data/fbank/aishell_cuts_dev.jsonl.gz
2023-03-21 18:33:59,812 INFO [decode.py:398] batch 0/?, cuts processed until now is 95
2023-03-21 18:34:07,903 INFO [zipformer.py:1455] attn_weights_entropy = tensor([3.0248, 3.2528, 2.6991, 3.5256, 3.5019, 3.1906, 3.3530, 3.3513],
       device='cuda:0'), covar=tensor([0.1796, 0.0752, 0.3624, 0.0732, 0.0305, 0.0309, 0.0494, 0.0420],
       device='cuda:0'), in_proj_covar=tensor([0.0247, 0.0226, 0.0243, 0.0252, 0.0193, 0.0192, 0.0212, 0.0220],
       device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0002, 0.0003],
       device='cuda:0')
2023-03-21 18:34:08,868 INFO [zipformer.py:1455] attn_weights_entropy = tensor([3.8128, 4.1197, 3.8312, 4.0374, 3.6198, 4.0945, 4.2563, 4.3304],
       device='cuda:0'), covar=tensor([0.0209, 0.0119, 0.0202, 0.0161, 0.0337, 0.0199, 0.0217, 0.0149],
       device='cuda:0'), in_proj_covar=tensor([0.0120, 0.0124, 0.0118, 0.0122, 0.0112, 0.0100, 0.0097, 0.0099],
       device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002],
       device='cuda:0')
2023-03-21 18:34:17,732 INFO [decode.py:398] batch 20/?, cuts processed until now is 2090
2023-03-21 18:34:33,517 INFO [decode.py:398] batch 40/?, cuts processed until now is 4307
2023-03-21 18:34:49,549 INFO [decode.py:398] batch 60/?, cuts processed until now is 6579
2023-03-21 18:34:55,128 INFO [decode.py:412] The transcripts are stored in pruned_transducer_stateless7/exp/fast_beam_search/recogs-test-epoch-42-avg-6-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2023-03-21 18:34:55,495 INFO [utils.py:558] [test-beam_4_max_contexts_4_max_states_8] %WER 4.90% [5130 / 104765, 169 ins, 567 del, 4394 sub ]
2023-03-21 18:34:55,946 INFO [decode.py:427] Wrote detailed error stats to pruned_transducer_stateless7/exp/fast_beam_search/errs-test-epoch-42-avg-6-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2023-03-21 18:34:55,951 INFO [decode.py:441] 
For test, CER of different settings are:
beam_4_max_contexts_4_max_states_8	4.9	best for test

2023-03-21 18:34:57,799 INFO [decode.py:398] batch 0/?, cuts processed until now is 107
2023-03-21 18:35:13,889 INFO [decode.py:398] batch 20/?, cuts processed until now is 2342
2023-03-21 18:35:29,299 INFO [decode.py:398] batch 40/?, cuts processed until now is 4714
2023-03-21 18:35:44,631 INFO [decode.py:398] batch 60/?, cuts processed until now is 7285
2023-03-21 18:36:00,459 INFO [decode.py:398] batch 80/?, cuts processed until now is 9571
2023-03-21 18:36:16,401 INFO [decode.py:398] batch 100/?, cuts processed until now is 12026
2023-03-21 18:36:29,588 INFO [decode.py:398] batch 120/?, cuts processed until now is 14130
2023-03-21 18:36:31,649 INFO [decode.py:412] The transcripts are stored in pruned_transducer_stateless7/exp/fast_beam_search/recogs-dev-epoch-42-avg-6-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2023-03-21 18:36:32,183 INFO [utils.py:558] [dev-beam_4_max_contexts_4_max_states_8] %WER 4.52% [9284 / 205341, 360 ins, 980 del, 7944 sub ]
2023-03-21 18:36:33,060 INFO [decode.py:427] Wrote detailed error stats to pruned_transducer_stateless7/exp/fast_beam_search/errs-dev-epoch-42-avg-6-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2023-03-21 18:36:33,061 INFO [decode.py:441] 
For dev, CER of different settings are:
beam_4_max_contexts_4_max_states_8	4.52	best for dev

2023-03-21 18:36:33,083 INFO [decode.py:620] Done!