csukuangfj committed
Commit 6c877ed
1 Parent(s): 67b5a48

add results for modified_beam_search

decoding-results/modified_beam_search/errs-test-clean-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/modified_beam_search/errs-test-other-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/modified_beam_search/log-decode-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model-2022-11-09-12-45-47 ADDED
@@ -0,0 +1,43 @@
+ 2022-11-09 12:45:47,077 INFO [decode.py:693] Decoding started
+ 2022-11-09 12:45:47,077 INFO [decode.py:699] Device: cuda:0
+ 2022-11-09 12:45:47,079 INFO [decode.py:714] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.21', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '9b0763930ea9d773c14462992157b75878d0f187', 'k2-git-date': 'Fri Nov 4 11:03:17 2022', 'lhotse-version': '1.3.0.dev+missing.version.file', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'from-dan-scaled-adam-exp253', 'icefall-git-sha1': '38a6f069-dirty', 'icefall-git-date': 'Tue Nov 8 11:51:58 2022', 'icefall-path': '/k2-dev/fangjun/open-source/icefall-dan-first', 'k2-path': '/k2-dev/fangjun/open-source/k2-master/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-fj/fangjun/open-source-2/lhotse-jsonl/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-1-0307195509-567fcb96d6-kdztg', 'IP address': '10.177.22.10'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'modified_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'simulate_streaming': False, 'decode_chunk_size': 16, 'left_context': 64, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7/exp/modified_beam_search'), 'suffix': 'epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-11-09 12:45:47,080 INFO [decode.py:716] About to create model
+ 2022-11-09 12:45:47,539 INFO [zipformer.py:177] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
+ 2022-11-09 12:45:47,552 INFO [decode.py:783] Calculating the averaged model over epoch range from 21 (excluded) to 30
+ 2022-11-09 12:45:53,700 INFO [decode.py:819] Number of model parameters: 70369391
+ 2022-11-09 12:45:53,700 INFO [asr_datamodule.py:444] About to get test-clean cuts
+ 2022-11-09 12:45:53,703 INFO [asr_datamodule.py:451] About to get test-other cuts
+ 2022-11-09 12:45:58,242 INFO [decode.py:591] batch 0/?, cuts processed until now is 48
+ 2022-11-09 12:47:13,742 INFO [decode.py:591] batch 20/?, cuts processed until now is 1654
+ 2022-11-09 12:47:13,911 INFO [zipformer.py:1427] attn_weights_entropy = tensor([4.2046, 4.1126, 4.1339, 4.2487, 4.0114, 4.0177, 3.8425, 4.1197],
+ device='cuda:0'), covar=tensor([0.0622, 0.0588, 0.0756, 0.0501, 0.1551, 0.1098, 0.0589, 0.0841],
+ device='cuda:0'), in_proj_covar=tensor([0.0528, 0.0682, 0.0575, 0.0656, 0.0876, 0.0774, 0.0553, 0.0495],
+ device='cuda:0'), out_proj_covar=tensor([0.0004, 0.0005, 0.0005, 0.0005, 0.0006, 0.0006, 0.0004, 0.0004],
+ device='cuda:0')
+ 2022-11-09 12:48:03,946 INFO [decode.py:609] The transcripts are stored in pruned_transducer_stateless7/exp/modified_beam_search/recogs-test-clean-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-11-09 12:48:04,040 INFO [utils.py:428] [test-clean-beam_size_4] %WER 2.15% [1132 / 52576, 127 ins, 91 del, 914 sub ]
+ 2022-11-09 12:48:04,232 INFO [decode.py:622] Wrote detailed error stats to pruned_transducer_stateless7/exp/modified_beam_search/errs-test-clean-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-11-09 12:48:04,233 INFO [decode.py:639]
+ For test-clean, WER of different settings are:
+ beam_size_4 2.15 best for test-clean
+
+ 2022-11-09 12:48:08,501 INFO [decode.py:591] batch 0/?, cuts processed until now is 57
+ 2022-11-09 12:48:51,084 INFO [zipformer.py:1427] attn_weights_entropy = tensor([2.6123, 2.4769, 3.9666, 3.9680, 2.4013, 2.5524, 2.6793, 2.0847],
+ device='cuda:0'), covar=tensor([0.1506, 0.3047, 0.0460, 0.0437, 0.1364, 0.2039, 0.2576, 0.3502],
+ device='cuda:0'), in_proj_covar=tensor([0.0297, 0.0399, 0.0279, 0.0306, 0.0263, 0.0294, 0.0377, 0.0364],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-09 12:49:24,764 INFO [decode.py:591] batch 20/?, cuts processed until now is 1886
+ 2022-11-09 12:49:53,029 INFO [zipformer.py:1427] attn_weights_entropy = tensor([2.5592, 2.3638, 3.7624, 3.7657, 2.3177, 2.4795, 2.5667, 2.0172],
+ device='cuda:0'), covar=tensor([0.1558, 0.3122, 0.0551, 0.0494, 0.1496, 0.2103, 0.2654, 0.3616],
+ device='cuda:0'), in_proj_covar=tensor([0.0297, 0.0399, 0.0279, 0.0306, 0.0263, 0.0294, 0.0377, 0.0364],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-09 12:50:13,820 INFO [decode.py:609] The transcripts are stored in pruned_transducer_stateless7/exp/modified_beam_search/recogs-test-other-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-11-09 12:50:13,938 INFO [utils.py:428] [test-other-beam_size_4] %WER 5.20% [2723 / 52343, 289 ins, 224 del, 2210 sub ]
+ 2022-11-09 12:50:14,175 INFO [decode.py:622] Wrote detailed error stats to pruned_transducer_stateless7/exp/modified_beam_search/errs-test-other-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt
+ 2022-11-09 12:50:14,176 INFO [decode.py:639]
+ For test-other, WER of different settings are:
+ beam_size_4 5.2 best for test-other
+
+ 2022-11-09 12:50:14,176 INFO [decode.py:850] Done!
decoding-results/modified_beam_search/recogs-test-clean-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding-results/modified_beam_search/recogs-test-other-beam_size_4-epoch-30-avg-9-modified_beam_search-beam-size-4-use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
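
The log above also shows that decoding used an averaged model over the epoch range 21 (excluded) to 30, i.e. nine checkpoints (epoch 30, avg 9, with use_averaged_model=True). The sketch below only illustrates plain element-wise state-dict averaging; the actual --use-averaged-model logic in icefall's decode.py differs in detail, and the checkpoint file names and the "model" key are assumptions.

import torch

def average_state_dicts(filenames):
    # Element-wise mean over the "model" state dicts of several checkpoints.
    # Illustrative sketch of checkpoint averaging, not icefall's exact procedure.
    avg = torch.load(filenames[0], map_location="cpu")["model"]
    for name in filenames[1:]:
        state = torch.load(name, map_location="cpu")["model"]
        for k in avg:
            avg[k] = avg[k] + state[k]
    for k in avg:
        if avg[k].is_floating_point():
            avg[k] = avg[k] / len(filenames)
    return avg

# Hypothetical layout: epoch-22.pt .. epoch-30.pt are the nine averaged checkpoints.
ckpts = [f"pruned_transducer_stateless7/exp/epoch-{e}.pt" for e in range(22, 31)]
# model.load_state_dict(average_state_dicts(ckpts))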