initial commit
- data/lang_bpe_500/bpe.model +3 -0
- decoding-results/fast_beam_search/errs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt +0 -0
- decoding-results/fast_beam_search/errs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt +0 -0
- decoding-results/fast_beam_search/log-decode-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-2023-03-10-12-40-53 +9 -0
- decoding-results/fast_beam_search/log-decode-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-2023-03-10-12-44-34 +45 -0
- decoding-results/fast_beam_search/recogs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt +0 -0
- decoding-results/fast_beam_search/recogs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt +0 -0
- decoding-results/fast_beam_search/wer-summary-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt +2 -0
- decoding-results/fast_beam_search/wer-summary-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt +2 -0
- decoding-results/greedy_search/errs-test-clean-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
- decoding-results/greedy_search/errs-test-other-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
- decoding-results/greedy_search/log-decode-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model-2023-03-10-10-18-49 +50 -0
- decoding-results/greedy_search/recogs-test-clean-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
- decoding-results/greedy_search/recogs-test-other-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
- decoding-results/greedy_search/symbol-delay-summary-test-clean-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt +2 -0
- decoding-results/greedy_search/symbol-delay-summary-test-other-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt +2 -0
- decoding-results/greedy_search/wer-summary-test-clean-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt +2 -0
- decoding-results/greedy_search/wer-summary-test-other-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt +2 -0
- decoding-results/modified_beam_search/errs-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
- decoding-results/modified_beam_search/errs-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
- decoding-results/modified_beam_search/log-decode-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model-2023-03-10-10-58-12 +42 -0
- decoding-results/modified_beam_search/recogs-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
- decoding-results/modified_beam_search/recogs-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
- decoding-results/modified_beam_search/symbol-delay-summary-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt +2 -0
- decoding-results/modified_beam_search/symbol-delay-summary-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt +2 -0
- decoding-results/modified_beam_search/wer-summary-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt +2 -0
- decoding-results/modified_beam_search/wer-summary-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt +2 -0
- exp/cpu_jit.pt +3 -0
- exp/epoch-30.pt +3 -0
- exp/export.sh +10 -0
- exp/log/log-train-2023-03-08-13-46-28-0 +0 -0
- exp/log/log-train-2023-03-08-13-46-28-1 +0 -0
- exp/log/log-train-2023-03-08-13-46-28-2 +0 -0
- exp/log/log-train-2023-03-08-13-46-28-3 +0 -0
- exp/pretrained.pt +3 -0
- exp/pretrained.sh +8 -0
- exp/run.sh +13 -0
- exp/tensorboard/events.out.tfevents.1678254388.de-74279-k2-train-9-0208143539-7dcb6bfd79-b6fdq.2269179.0 +3 -0
- test_wavs/1089-134686-0001.wav +0 -0
- test_wavs/1221-135766-0001.wav +0 -0
- test_wavs/1221-135766-0002.wav +0 -0
- test_wavs/trans.txt +3 -0
data/lang_bpe_500/bpe.model
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c53433de083c4a6ad12d034550ef22de68cec62c4f58932a7b6b8b2f1e743fa5
+size 244865
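Note that the `bpe.model` diff above is not the model itself but a Git LFS pointer (a small text stub recording the object's hash and size; the binary lives in LFS storage). As a minimal sketch, a pointer like this can be parsed with plain string handling; `parse_lfs_pointer` is a hypothetical helper for illustration, not part of the icefall repo:

```python
# Parse a Git LFS pointer file (the three "version / oid / size" lines above)
# into a dict. Hypothetical helper for illustration only.
def parse_lfs_pointer(text: str) -> dict:
    """Return the key/value fields of a git-lfs pointer as a dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>"; split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:c53433de083c4a6ad12d034550ef22de68cec62c4f58932a7b6b8b2f1e743fa5
size 244865
"""
info = parse_lfs_pointer(pointer)
```

Fetching the real 244865-byte model requires an LFS-aware clone (e.g. `git lfs pull`); a plain clone only retrieves pointers like the one parsed here.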
decoding-results/fast_beam_search/errs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
ADDED
The diff for this file is too large to render.
See raw diff
decoding-results/fast_beam_search/errs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
ADDED
The diff for this file is too large to render.
See raw diff
decoding-results/fast_beam_search/log-decode-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-2023-03-10-12-40-53
ADDED
@@ -0,0 +1,9 @@
+2023-03-10 12:40:53,214 INFO [decode_with_timestamp.py:878] Decoding started
+2023-03-10 12:40:53,214 INFO [decode_with_timestamp.py:884] Device: cuda:0
+2023-03-10 12:40:53,219 INFO [decode_with_timestamp.py:899] {'frame_shift_ms': 10.0, 'allowed_excess_duration_ratio': 0.1, 'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '96c9a2aece2a3a7633da07740e24fa3d96f5498c', 'k2-git-date': 'Thu Nov 10 08:14:02 2022', 'lhotse-version': '1.13.0.dev+git.527d964.clean', 'torch-version': '1.12.1', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.8', 'icefall-git-branch': 'random_padding', 'icefall-git-sha1': '202ce08-clean', 'icefall-git-date': 'Thu Mar 9 15:05:03 2023', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_random_padding', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_latest/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/lhotse_development/lhotse_random_padding_left/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-6-1219221738-65dd59bbf8-2ghmr', 'IP address': '10.177.28.85'}, 'epoch': 30, 'iter': 0, 'avg': 11, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'simulate_streaming': False, 'decode_chunk_size': 16, 'left_context': 64, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 3, 'backoff_id': 500, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank_ali'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'random_left_padding': False, 'num_left_padding': 8, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/fast_beam_search'), 'suffix': 'epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model', 'blank_id': 0, 'unk_id': 2}
+2023-03-10 12:40:53,219 INFO [decode_with_timestamp.py:901] About to create model
+2023-03-10 12:40:54,028 INFO [zipformer.py:178] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
+2023-03-10 12:40:54,048 INFO [decode_with_timestamp.py:968] Calculating the averaged model over epoch range from 19 (excluded) to 30
+2023-03-10 12:41:00,702 INFO [decode_with_timestamp.py:1030] Number of model parameters: 70369391
+2023-03-10 12:41:00,702 INFO [asr_datamodule.py:463] About to get test-clean cuts
+2023-03-10 12:41:00,704 INFO [asr_datamodule.py:470] About to get test-other cuts
decoding-results/fast_beam_search/log-decode-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-2023-03-10-12-44-34
ADDED
@@ -0,0 +1,45 @@
+2023-03-10 12:44:34,060 INFO [decode.py:827] Decoding started
+2023-03-10 12:44:34,060 INFO [decode.py:833] Device: cuda:0
+2023-03-10 12:44:34,065 INFO [decode.py:848] {'frame_shift_ms': 10.0, 'allowed_excess_duration_ratio': 0.1, 'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '96c9a2aece2a3a7633da07740e24fa3d96f5498c', 'k2-git-date': 'Thu Nov 10 08:14:02 2022', 'lhotse-version': '1.13.0.dev+git.527d964.clean', 'torch-version': '1.12.1', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.8', 'icefall-git-branch': 'random_padding', 'icefall-git-sha1': '202ce08-clean', 'icefall-git-date': 'Thu Mar 9 15:05:03 2023', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_random_padding', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_latest/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/lhotse_development/lhotse_random_padding_left/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-6-1219221738-65dd59bbf8-2ghmr', 'IP address': '10.177.28.85'}, 'epoch': 30, 'iter': 0, 'avg': 11, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'simulate_streaming': False, 'decode_chunk_size': 16, 'left_context': 64, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 3, 'backoff_id': 500, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank_ali'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'random_left_padding': False, 'num_left_padding': 8, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/fast_beam_search'), 'suffix': 'epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model', 'blank_id': 0, 'unk_id': 2}
+2023-03-10 12:44:34,066 INFO [decode.py:850] About to create model
+2023-03-10 12:44:34,880 INFO [zipformer.py:178] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
+2023-03-10 12:44:34,899 INFO [decode.py:917] Calculating the averaged model over epoch range from 19 (excluded) to 30
+2023-03-10 12:44:40,068 INFO [decode.py:979] Number of model parameters: 70369391
+2023-03-10 12:44:40,068 INFO [asr_datamodule.py:463] About to get test-clean cuts
+2023-03-10 12:44:40,071 INFO [asr_datamodule.py:470] About to get test-other cuts
+2023-03-10 12:44:46,177 INFO [decode.py:714] batch 0/?, cuts processed until now is 36
+2023-03-10 12:45:18,051 INFO [zipformer.py:1455] attn_weights_entropy = tensor([4.3213, 4.3277, 4.3396, 3.9948, 4.1950, 4.0469, 4.3681, 4.4062],
+       device='cuda:0'), covar=tensor([0.0066, 0.0055, 0.0055, 0.0120, 0.0062, 0.0164, 0.0068, 0.0075],
+       device='cuda:0'), in_proj_covar=tensor([0.0095, 0.0069, 0.0075, 0.0094, 0.0075, 0.0104, 0.0087, 0.0086],
+       device='cuda:0'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003],
+       device='cuda:0')
+2023-03-10 12:45:27,222 INFO [zipformer.py:1455] attn_weights_entropy = tensor([4.6283, 4.2627, 4.9657, 4.3743, 4.0874, 5.4122, 4.7073, 4.1277],
+       device='cuda:0'), covar=tensor([0.0337, 0.0873, 0.0255, 0.0372, 0.0992, 0.0127, 0.0326, 0.0570],
+       device='cuda:0'), in_proj_covar=tensor([0.0205, 0.0234, 0.0213, 0.0158, 0.0218, 0.0205, 0.0243, 0.0190],
+       device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002],
+       device='cuda:0')
+2023-03-10 12:45:32,951 INFO [decode.py:714] batch 20/?, cuts processed until now is 1037
+2023-03-10 12:46:15,809 INFO [decode.py:714] batch 40/?, cuts processed until now is 2298
+2023-03-10 12:46:41,781 INFO [decode.py:730] The transcripts are stored in pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/fast_beam_search/recogs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
+2023-03-10 12:46:41,930 INFO [utils.py:558] [test-clean-beam_20.0_max_contexts_8_max_states_64] %WER 2.27% [1191 / 52576, 128 ins, 109 del, 954 sub ]
+2023-03-10 12:46:42,269 INFO [decode.py:743] Wrote detailed error stats to pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/fast_beam_search/errs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
+2023-03-10 12:46:42,270 INFO [decode.py:759]
+For test-clean, WER of different settings are:
+beam_20.0_max_contexts_8_max_states_64	2.27	best for test-clean
+
+2023-03-10 12:46:45,382 INFO [decode.py:714] batch 0/?, cuts processed until now is 43
+2023-03-10 12:47:27,589 INFO [decode.py:714] batch 20/?, cuts processed until now is 1195
+2023-03-10 12:48:04,666 INFO [zipformer.py:1455] attn_weights_entropy = tensor([3.8600, 4.4991, 4.0330, 3.9786, 3.9262, 4.4186, 4.2580, 3.9649],
+       device='cuda:0'), covar=tensor([0.1237, 0.0650, 0.1262, 0.0959, 0.1304, 0.1194, 0.0689, 0.1985],
+       device='cuda:0'), in_proj_covar=tensor([0.0358, 0.0286, 0.0314, 0.0315, 0.0327, 0.0426, 0.0283, 0.0420],
+       device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0002, 0.0004],
+       device='cuda:0')
+2023-03-10 12:48:05,557 INFO [decode.py:714] batch 40/?, cuts processed until now is 2640
+2023-03-10 12:48:21,202 INFO [decode.py:730] The transcripts are stored in pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/fast_beam_search/recogs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
+2023-03-10 12:48:21,355 INFO [utils.py:558] [test-other-beam_20.0_max_contexts_8_max_states_64] %WER 5.19% [2714 / 52343, 267 ins, 229 del, 2218 sub ]
+2023-03-10 12:48:21,706 INFO [decode.py:743] Wrote detailed error stats to pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/fast_beam_search/errs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
+2023-03-10 12:48:21,707 INFO [decode.py:759]
+For test-other, WER of different settings are:
+beam_20.0_max_contexts_8_max_states_64	5.19	best for test-other
+
+2023-03-10 12:48:21,707 INFO [decode.py:1012] Done!
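The `%WER 2.27% [1191 / 52576, 128 ins, 109 del, 954 sub ]` lines above report word error rate as total errors (insertions + deletions + substitutions) over the number of reference words. As a sanity check, the reported percentages can be recomputed from those counts; `wer_percent` is a small illustrative helper, not a function from icefall:

```python
# Recompute the %WER figures in the log from their error counts.
# WER = (insertions + deletions + substitutions) / reference_words * 100
def wer_percent(ins: int, dels: int, subs: int, ref_words: int) -> float:
    return round((ins + dels + subs) / ref_words * 100, 2)

# test-clean: 128 ins + 109 del + 954 sub = 1191 errors over 52576 ref words
clean = wer_percent(128, 109, 954, 52576)   # matches the logged 2.27
# test-other: 267 ins + 229 del + 2218 sub = 2714 errors over 52343 ref words
other = wer_percent(267, 229, 2218, 52343)  # matches the logged 5.19
```

The same arithmetic reproduces the greedy_search figures further down (1182/52576 → 2.25, 2735/52343 → 5.23).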
decoding-results/fast_beam_search/recogs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
ADDED
The diff for this file is too large to render.
See raw diff
decoding-results/fast_beam_search/recogs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
ADDED
The diff for this file is too large to render.
See raw diff
decoding-results/fast_beam_search/wer-summary-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
+settings	WER
+beam_20.0_max_contexts_8_max_states_64	2.27
decoding-results/fast_beam_search/wer-summary-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-11-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
+settings	WER
+beam_20.0_max_contexts_8_max_states_64	5.19
decoding-results/greedy_search/errs-test-clean-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED
The diff for this file is too large to render.
See raw diff
decoding-results/greedy_search/errs-test-other-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED
The diff for this file is too large to render.
See raw diff
decoding-results/greedy_search/log-decode-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model-2023-03-10-10-18-49
ADDED
@@ -0,0 +1,50 @@
+2023-03-10 10:18:49,578 INFO [decode_with_timestamp.py:878] Decoding started
+2023-03-10 10:18:49,578 INFO [decode_with_timestamp.py:884] Device: cuda:0
+2023-03-10 10:18:49,581 INFO [decode_with_timestamp.py:899] {'frame_shift_ms': 10.0, 'allowed_excess_duration_ratio': 0.1, 'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '96c9a2aece2a3a7633da07740e24fa3d96f5498c', 'k2-git-date': 'Thu Nov 10 08:14:02 2022', 'lhotse-version': '1.13.0.dev+git.527d964.clean', 'torch-version': '1.12.1', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.8', 'icefall-git-branch': 'random_padding', 'icefall-git-sha1': '202ce08-clean', 'icefall-git-date': 'Thu Mar 9 15:05:03 2023', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_random_padding', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_latest/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/lhotse_development/lhotse_random_padding_left/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-1216192652-5bcf7587b4-n6q9m', 'IP address': '10.177.74.211'}, 'epoch': 30, 'iter': 0, 'avg': 11, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'simulate_streaming': False, 'decode_chunk_size': 16, 'left_context': 64, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 3, 'backoff_id': 500, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank_ali'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'random_left_padding': False, 'num_left_padding': 8, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/greedy_search'), 'suffix': 'epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model', 'blank_id': 0, 'unk_id': 2}
+2023-03-10 10:18:49,581 INFO [decode_with_timestamp.py:901] About to create model
+2023-03-10 10:18:50,174 INFO [zipformer.py:178] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
+2023-03-10 10:18:50,187 INFO [decode_with_timestamp.py:968] Calculating the averaged model over epoch range from 19 (excluded) to 30
+2023-03-10 10:18:59,777 INFO [decode_with_timestamp.py:1030] Number of model parameters: 70369391
+2023-03-10 10:18:59,778 INFO [asr_datamodule.py:463] About to get test-clean cuts
+2023-03-10 10:18:59,781 INFO [asr_datamodule.py:470] About to get test-other cuts
+2023-03-10 10:19:04,581 INFO [decode_with_timestamp.py:740] batch 0/?, cuts processed until now is 36
+2023-03-10 10:20:02,828 INFO [decode_with_timestamp.py:740] batch 50/?, cuts processed until now is 2611
+2023-03-10 10:20:04,125 INFO [decode_with_timestamp.py:1062] Averaged first symbol emission time: 0.09091603053435197
+2023-03-10 10:20:04,203 INFO [decode_with_timestamp.py:760] The transcripts are stored in pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/greedy_search/recogs-test-clean-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
+2023-03-10 10:20:04,334 INFO [utils.py:795] [test-clean-greedy_search] %WER 2.25% [1182 / 52576, 126 ins, 103 del, 953 sub ]
+2023-03-10 10:20:04,334 INFO [utils.py:800] [test-clean-greedy_search] %symbol-delay mean (s): -0.044, variance: 0.007 computed on 51520 correct words
+2023-03-10 10:20:04,631 INFO [decode_with_timestamp.py:774] Wrote detailed error stats to pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/greedy_search/errs-test-clean-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
+2023-03-10 10:20:04,633 INFO [decode_with_timestamp.py:803]
+For test-clean, WER of different settings are:
+greedy_search	2.25	best for test-clean
+
+2023-03-10 10:20:04,633 INFO [decode_with_timestamp.py:810]
+For test-clean, symbol-delay of different settings are:
+greedy_search	mean: -0.044s, variance: 0.007	best for test-clean
+
+2023-03-10 10:20:06,540 INFO [decode_with_timestamp.py:740] batch 0/?, cuts processed until now is 43
+2023-03-10 10:20:49,581 INFO [zipformer.py:1455] attn_weights_entropy = tensor([3.4986, 2.8964, 3.9553, 3.3571, 2.6642, 4.2705, 3.7907, 2.8587],
+       device='cuda:0'), covar=tensor([0.0571, 0.1424, 0.0377, 0.0551, 0.1724, 0.0229, 0.0588, 0.0908],
+       device='cuda:0'), in_proj_covar=tensor([0.0205, 0.0234, 0.0213, 0.0158, 0.0218, 0.0205, 0.0243, 0.0190],
+       device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002],
+       device='cuda:0')
+2023-03-10 10:20:51,314 INFO [zipformer.py:1455] attn_weights_entropy = tensor([3.6831, 3.6830, 3.7112, 3.5030, 3.5668, 3.5269, 3.7213, 3.7422],
+       device='cuda:0'), covar=tensor([0.0096, 0.0076, 0.0089, 0.0122, 0.0089, 0.0168, 0.0095, 0.0110],
+       device='cuda:0'), in_proj_covar=tensor([0.0095, 0.0069, 0.0075, 0.0094, 0.0075, 0.0104, 0.0087, 0.0086],
+       device='cuda:0'), out_proj_covar=tensor([0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003],
+       device='cuda:0')
+2023-03-10 10:21:01,125 INFO [decode_with_timestamp.py:740] batch 50/?, cuts processed until now is 2939
+2023-03-10 10:21:01,241 INFO [decode_with_timestamp.py:1062] Averaged first symbol emission time: 0.11329023477373397
+2023-03-10 10:21:01,321 INFO [decode_with_timestamp.py:760] The transcripts are stored in pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/greedy_search/recogs-test-other-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
+2023-03-10 10:21:01,454 INFO [utils.py:795] [test-other-greedy_search] %WER 5.23% [2735 / 52343, 265 ins, 232 del, 2238 sub ]
+2023-03-10 10:21:01,454 INFO [utils.py:800] [test-other-greedy_search] %symbol-delay mean (s): -0.05, variance: 0.008 computed on 49855 correct words
+2023-03-10 10:21:01,666 INFO [decode_with_timestamp.py:774] Wrote detailed error stats to pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/greedy_search/errs-test-other-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
+2023-03-10 10:21:01,667 INFO [decode_with_timestamp.py:803]
+For test-other, WER of different settings are:
+greedy_search	5.23	best for test-other
+
+2023-03-10 10:21:01,667 INFO [decode_with_timestamp.py:810]
+For test-other, symbol-delay of different settings are:
+greedy_search	mean: -0.05s, variance: 0.008	best for test-other
+
+2023-03-10 10:21:01,667 INFO [decode_with_timestamp.py:1071] Done!
decoding-results/greedy_search/recogs-test-clean-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED
The diff for this file is too large to render.
See raw diff
decoding-results/greedy_search/recogs-test-other-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
decoding-results/greedy_search/symbol-delay-summary-test-clean-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
+settings	symbol-delay
+greedy_search	mean: -0.044s, variance: 0.007
decoding-results/greedy_search/symbol-delay-summary-test-other-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
+settings	symbol-delay
+greedy_search	mean: -0.05s, variance: 0.008
decoding-results/greedy_search/wer-summary-test-clean-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
+settings	WER
+greedy_search	2.25
decoding-results/greedy_search/wer-summary-test-other-greedy_search-epoch-30-avg-11-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
+settings	WER
+greedy_search	5.23
decoding-results/modified_beam_search/errs-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED
The diff for this file is too large to render.
See raw diff
decoding-results/modified_beam_search/errs-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED
The diff for this file is too large to render.
See raw diff
decoding-results/modified_beam_search/log-decode-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model-2023-03-10-10-58-12
ADDED
@@ -0,0 +1,42 @@
+2023-03-10 10:58:12,879 INFO [decode_with_timestamp.py:878] Decoding started
+2023-03-10 10:58:12,880 INFO [decode_with_timestamp.py:884] Device: cuda:0
+2023-03-10 10:58:12,882 INFO [decode_with_timestamp.py:899] {'frame_shift_ms': 10.0, 'allowed_excess_duration_ratio': 0.1, 'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '96c9a2aece2a3a7633da07740e24fa3d96f5498c', 'k2-git-date': 'Thu Nov 10 08:14:02 2022', 'lhotse-version': '1.13.0.dev+git.527d964.clean', 'torch-version': '1.12.1', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.8', 'icefall-git-branch': 'random_padding', 'icefall-git-sha1': '202ce08-clean', 'icefall-git-date': 'Thu Mar 9 15:05:03 2023', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_random_padding', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_latest/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/lhotse_development/lhotse_random_padding_left/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-1216192652-5bcf7587b4-n6q9m', 'IP address': '10.177.74.211'}, 'epoch': 30, 'iter': 0, 'avg': 11, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'modified_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'simulate_streaming': False, 'decode_chunk_size': 16, 'left_context': 64, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 3, 'backoff_id': 500, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank_ali'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'random_left_padding': False, 'num_left_padding': 8, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/modified_beam_search'), 'suffix': 'epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model', 'blank_id': 0, 'unk_id': 2}
+2023-03-10 10:58:12,882 INFO [decode_with_timestamp.py:901] About to create model
+2023-03-10 10:58:13,465 INFO [zipformer.py:178] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
+2023-03-10 10:58:13,478 INFO [decode_with_timestamp.py:968] Calculating the averaged model over epoch range from 19 (excluded) to 30
+2023-03-10 10:58:22,926 INFO [decode_with_timestamp.py:1030] Number of model parameters: 70369391
+2023-03-10 10:58:22,927 INFO [asr_datamodule.py:463] About to get test-clean cuts
+2023-03-10 10:58:22,932 INFO [asr_datamodule.py:470] About to get test-other cuts
|
10 |
+
2023-03-10 10:58:30,111 INFO [decode_with_timestamp.py:740] batch 0/?, cuts processed until now is 36
|
11 |
+
2023-03-10 10:59:47,880 INFO [decode_with_timestamp.py:740] batch 20/?, cuts processed until now is 1037
|
12 |
+
2023-03-10 11:00:57,301 INFO [decode_with_timestamp.py:740] batch 40/?, cuts processed until now is 2298
|
13 |
+
2023-03-10 11:01:22,246 INFO [decode_with_timestamp.py:1062] Averaged first symbol emission time: 0.1069923664122152
|
14 |
+
2023-03-10 11:01:22,324 INFO [decode_with_timestamp.py:760] The transcripts are stored in pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/modified_beam_search/recogs-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
|
15 |
+
2023-03-10 11:01:22,457 INFO [utils.py:795] [test-clean-beam_size_4] %WER 2.22% [1168 / 52576, 126 ins, 97 del, 945 sub ]
|
16 |
+
2023-03-10 11:01:22,457 INFO [utils.py:800] [test-clean-beam_size_4] %symbol-delay mean (s): -0.043, variance: 0.007 computed on 51534 correct words
|
17 |
+
2023-03-10 11:01:22,673 INFO [decode_with_timestamp.py:774] Wrote detailed error stats to pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/modified_beam_search/errs-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
|
18 |
+
2023-03-10 11:01:22,674 INFO [decode_with_timestamp.py:803]
|
19 |
+
For test-clean, WER of different settings are:
|
20 |
+
beam_size_4 2.22 best for test-clean
|
21 |
+
|
22 |
+
2023-03-10 11:01:22,674 INFO [decode_with_timestamp.py:810]
|
23 |
+
For test-clean, symbol-delay of different settings are:
|
24 |
+
beam_size_4 mean: -0.043s, variance: 0.007 best for test-clean
|
25 |
+
|
26 |
+
2023-03-10 11:01:27,036 INFO [decode_with_timestamp.py:740] batch 0/?, cuts processed until now is 43
|
27 |
+
2023-03-10 11:02:40,085 INFO [decode_with_timestamp.py:740] batch 20/?, cuts processed until now is 1195
|
28 |
+
2023-03-10 11:03:50,482 INFO [decode_with_timestamp.py:740] batch 40/?, cuts processed until now is 2640
|
29 |
+
2023-03-10 11:04:09,742 INFO [decode_with_timestamp.py:1062] Averaged first symbol emission time: 0.12684586594079794
|
30 |
+
2023-03-10 11:04:09,827 INFO [decode_with_timestamp.py:760] The transcripts are stored in pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/modified_beam_search/recogs-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
|
31 |
+
2023-03-10 11:04:09,963 INFO [utils.py:795] [test-other-beam_size_4] %WER 5.14% [2691 / 52343, 268 ins, 209 del, 2214 sub ]
|
32 |
+
2023-03-10 11:04:09,963 INFO [utils.py:800] [test-other-beam_size_4] %symbol-delay mean (s): -0.05, variance: 0.009 computed on 49902 correct words
|
33 |
+
2023-03-10 11:04:10,173 INFO [decode_with_timestamp.py:774] Wrote detailed error stats to pruned_transducer_stateless7/exp_960h_no_paddingidx_ngpu4/modified_beam_search/errs-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
|
34 |
+
2023-03-10 11:04:10,174 INFO [decode_with_timestamp.py:803]
|
35 |
+
For test-other, WER of different settings are:
|
36 |
+
beam_size_4 5.14 best for test-other
|
37 |
+
|
38 |
+
2023-03-10 11:04:10,174 INFO [decode_with_timestamp.py:810]
|
39 |
+
For test-other, symbol-delay of different settings are:
|
40 |
+
beam_size_4 mean: -0.05s, variance: 0.009 best for test-other
|
41 |
+
|
42 |
+
2023-03-10 11:04:10,174 INFO [decode_with_timestamp.py:1071] Done!
|
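The `%WER` lines above report a total error count and its breakdown in brackets. A minimal sketch of how those figures relate, assuming the usual definition WER = (insertions + deletions + substitutions) / reference words; the helper name `wer_percent` is made up for illustration and is not part of icefall:

```python
def wer_percent(ins: int, dels: int, subs: int, ref_words: int) -> float:
    """Word error rate in percent: (ins + del + sub) / reference words."""
    return round(100.0 * (ins + dels + subs) / ref_words, 2)

# Counts taken from the log lines above.
print(wer_percent(126, 97, 945, 52576))    # test-clean
print(wer_percent(268, 209, 2214, 52343))  # test-other
```

With the logged counts this reproduces the reported 2.22 (test-clean) and 5.14 (test-other).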
decoding-results/modified_beam_search/recogs-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED
The diff for this file is too large to render. See raw diff.

decoding-results/modified_beam_search/recogs-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED
The diff for this file is too large to render. See raw diff.
decoding-results/modified_beam_search/symbol-delay-summary-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
+settings	symbol-delay
+beam_size_4	mean: -0.043s, variance: 0.007
decoding-results/modified_beam_search/symbol-delay-summary-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
+settings	symbol-delay
+beam_size_4	mean: -0.05s, variance: 0.009
decoding-results/modified_beam_search/wer-summary-test-clean-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
+settings	WER
+beam_size_4	2.22
decoding-results/modified_beam_search/wer-summary-test-other-beam_size_4-epoch-30-avg-11-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
+settings	WER
+beam_size_4	5.14
exp/cpu_jit.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19fd67cc925607bc275daaf6cf35ba36b22ad498aa021f95280abbaafba9014a
+size 358526782
exp/epoch-30.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e58739a186421e72b622449fe0941e7edae1ecd141db1f5ab5eef46c0233050
+size 1126567711
exp/export.sh
ADDED
@@ -0,0 +1,10 @@
+#!/usr/bin/env bash
+
+export CUDA_VISIBLE_DEVICES=7
+
+./pruned_transducer_stateless7/export.py \
+  --exp-dir ./pruned_transducer_stateless7/exp \
+  --bpe-model data/lang_bpe_500/bpe.model \
+  --epoch 30 \
+  --avg 11 \
+  --jit 0
exp/log/log-train-2023-03-08-13-46-28-0
ADDED
The diff for this file is too large to render. See raw diff.

exp/log/log-train-2023-03-08-13-46-28-1
ADDED
The diff for this file is too large to render. See raw diff.

exp/log/log-train-2023-03-08-13-46-28-2
ADDED
The diff for this file is too large to render. See raw diff.

exp/log/log-train-2023-03-08-13-46-28-3
ADDED
The diff for this file is too large to render. See raw diff.
exp/pretrained.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:577ea4f17b0aa2eefcc0b6c260d958bc3b4e881bc50bb263db2df4d77e07d45a
+size 281766253
exp/pretrained.sh
ADDED
@@ -0,0 +1,8 @@
+#!/usr/bin/env bash
+
+./pruned_transducer_stateless7/pretrained.py \
+  --checkpoint ./pruned_transducer_stateless7/exp/pretrained.pt \
+  --bpe-model ./data/lang_bpe_500/bpe.model \
+  --method fast_beam_search \
+  ./test_wavs/1089-134686-0001.wav \
+  ./test_wavs/1221-135766-0001.wav
exp/run.sh
ADDED
@@ -0,0 +1,13 @@
+#!/usr/bin/env bash
+
+set -ex
+export CUDA_VISIBLE_DEVICES="0,3,6,7"
+
+./pruned_transducer_stateless7/train.py \
+  --world-size 4 \
+  --num-epochs 30 \
+  --full-libri 1 \
+  --use-fp16 1 \
+  --max-duration 750 \
+  --exp-dir pruned_transducer_stateless7/exp \
+  --master-port 12535
exp/tensorboard/events.out.tfevents.1678254388.de-74279-k2-train-9-0208143539-7dcb6bfd79-b6fdq.2269179.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a80031141a41891331c37100d45009c184eead4434d03df909f57c5e15627e8e
+size 1058806
test_wavs/1089-134686-0001.wav
ADDED
Binary file (212 kB). View file.

test_wavs/1221-135766-0001.wav
ADDED
Binary file (535 kB). View file.

test_wavs/1221-135766-0002.wav
ADDED
Binary file (154 kB). View file.
test_wavs/trans.txt
ADDED
@@ -0,0 +1,3 @@
+1089-134686-0001 AFTER EARLY NIGHTFALL THE YELLOW LAMPS WOULD LIGHT UP HERE AND THERE THE SQUALID QUARTER OF THE BROTHELS
+1221-135766-0001 GOD AS A DIRECT CONSEQUENCE OF THE SIN WHICH MAN THUS PUNISHED HAD GIVEN HER A LOVELY CHILD WHOSE PLACE WAS ON THAT SAME DISHONOURED BOSOM TO CONNECT HER PARENT FOR EVER WITH THE RACE AND DESCENT OF MORTALS AND TO BE FINALLY A BLESSED SOUL IN HEAVEN
+1221-135766-0002 YET THESE THOUGHTS AFFECTED HESTER PRYNNE LESS WITH HOPE THAN APPREHENSION