add small streaming zipformer
- README.md +14 -1
- data/lang_bpe_500/bpe.model +3 -0
- exp/cpu_jit.pt +3 -0
- exp/pretrained.pt +3 -0
- exp/tensorboard/events.out.tfevents.1675637915.r7n07.429298.0 +3 -0
- exp/tensorboard/events.out.tfevents.1675917773.r8n07.149353.0 +3 -0
- log/greedy_search/errs-test-clean-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
- log/greedy_search/errs-test-other-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
- log/greedy_search/log-decode-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model-2023-02-12-08-58-48 +6 -0
- log/greedy_search/log-decode-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model-2023-02-12-09-04-44 +28 -0
- log/greedy_search/recogs-test-clean-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
- log/greedy_search/recogs-test-other-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt +0 -0
- log/greedy_search/wer-summary-test-clean-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt +2 -0
- log/greedy_search/wer-summary-test-other-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt +2 -0
- log/log-train-2023-02-05-17-58-35-0 +0 -0
- log/log-train-2023-02-05-17-58-35-1 +0 -0
- log/log-train-2023-02-05-17-58-35-2 +0 -0
- log/log-train-2023-02-05-17-58-35-3 +0 -0
- log/log-train-2023-02-08-23-42-53-0 +0 -0
- log/log-train-2023-02-08-23-42-53-1 +0 -0
- log/log-train-2023-02-08-23-42-53-2 +0 -0
- log/log-train-2023-02-08-23-42-53-3 +0 -0
- log/modified_beam_search/errs-test-clean-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
- log/modified_beam_search/errs-test-other-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
- log/modified_beam_search/log-decode-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model-2023-02-12-09-10-17 +45 -0
- log/modified_beam_search/recogs-test-clean-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
- log/modified_beam_search/recogs-test-other-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt +0 -0
- log/modified_beam_search/wer-summary-test-clean-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt +2 -0
- log/modified_beam_search/wer-summary-test-other-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt +2 -0
README.md
CHANGED
@@ -1,3 +1,16 @@
 ---
-license: apache
+license: apache 2.0
 ---
+# LibriSpeech pruned_transducer_stateless7_streaming
+
+This model is based on the icefall `pruned_transducer_stateless7_streaming` recipe,
+but the model parameters are modified to be smaller in size. It can be
+considered a streaming version of [this model](https://huggingface.co/Zengwei/icefall-asr-librispeech-pruned-transducer-stateless7-20M-2023-01-28) and follows
+the same parameter configuration.
+
+## Performance Record
+
+| Decoding method      | test-clean | test-other |
+|----------------------|------------|------------|
+| greedy search        | 3.94       | 9.79       |
+| modified beam search | 3.88       | 9.53       |
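The checkpoints and BPE model added below can be loaded directly with PyTorch and SentencePiece. A minimal sketch, not part of this commit, assuming the repository root as working directory and that the LFS binaries have been fetched:

```python
# Minimal loading sketch (assumes `git lfs pull` has materialized the binaries).
# The streaming decode loop itself (chunk size 32, greedy_search or
# modified_beam_search) is driven by the icefall
# pruned_transducer_stateless7_streaming recipe scripts; the decoding logs
# below record the exact hyperparameters behind the WERs in the table above.
import torch
import sentencepiece as spm

model = torch.jit.load("exp/cpu_jit.pt", map_location="cpu")  # TorchScript export
model.eval()

sp = spm.SentencePieceProcessor()
sp.load("data/lang_bpe_500/bpe.model")
print(sp.get_piece_size())  # 500, matching lang_bpe_500 / vocab_size in the logs
```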
data/lang_bpe_500/bpe.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c53433de083c4a6ad12d034550ef22de68cec62c4f58932a7b6b8b2f1e743fa5
size 244865

exp/cpu_jit.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:322c5e910063a3c88188e9371d7746209515e783e1fbd4bc10a07b96916a1daf
size 134186908

exp/pretrained.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3222015e7c994de8bc9fd7f03414d6e34e9f81345380d6ee247f55457ac1ff67
size 82986907

exp/tensorboard/events.out.tfevents.1675637915.r7n07.429298.0
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6139d1abe9b94257a13f6a4357402edcecb1552ca414ae70f2e4c571668093a6
size 2108670

exp/tensorboard/events.out.tfevents.1675917773.r8n07.149353.0
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cba317a5708a495eff4efc2dbdfd8ef06753f25208fe549291341dbe89ab0d9e
size 234625
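The binaries above are stored as Git LFS pointers (version / oid / size). A minimal sketch, assuming the objects have already been fetched (for example with `git lfs pull`), of checking a downloaded file against the sha256 and size recorded in its pointer; the helper name is hypothetical:

```python
# Hypothetical helper: verify a fetched LFS object against its pointer metadata.
import hashlib
import os

def verify_lfs_object(path: str, expected_oid: str, expected_size: int) -> bool:
    """Return True if the file matches the pointer's size and sha256 oid."""
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_oid

# Values taken from the data/lang_bpe_500/bpe.model pointer above.
print(verify_lfs_object(
    "data/lang_bpe_500/bpe.model",
    "c53433de083c4a6ad12d034550ef22de68cec62c4f58932a7b6b8b2f1e743fa5",
    244865,
))
```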
log/greedy_search/errs-test-clean-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED (diff too large to render)

log/greedy_search/errs-test-other-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED (diff too large to render)
log/greedy_search/log-decode-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model-2023-02-12-08-58-48
ADDED
@@ -0,0 +1,6 @@
2023-02-12 08:58:48,826 INFO [decode.py:655] Decoding started
2023-02-12 08:58:48,827 INFO [decode.py:661] Device: cuda:0
2023-02-12 08:58:48,855 INFO [decode.py:671] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.3', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': '3b81ac9686aee539d447bb2085b2cdfc131c7c91', 'k2-git-date': 'Thu Jan 26 20:40:25 2023', 'lhotse-version': '1.9.0.dev+git.97bf4b0.dirty', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'surt', 'icefall-git-sha1': 'f8acb25-dirty', 'icefall-git-date': 'Thu Feb 9 12:58:59 2023', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r7n03', 'IP address': '10.1.7.3'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/v1'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'full_libri': True, 'manifest_dir': PosixPath('data/manifests'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/v1/greedy_search'), 'suffix': 'epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2023-02-12 08:58:48,856 INFO [decode.py:673] About to create model
2023-02-12 08:58:49,471 INFO [zipformer.py:402] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-02-12 08:58:49,480 INFO [decode.py:744] Calculating the averaged model over epoch range from 21 (excluded) to 30
log/greedy_search/log-decode-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model-2023-02-12-09-04-44
ADDED
@@ -0,0 +1,28 @@
2023-02-12 09:04:44,043 INFO [decode.py:655] Decoding started
2023-02-12 09:04:44,044 INFO [decode.py:661] Device: cuda:0
2023-02-12 09:04:44,046 INFO [decode.py:671] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.3', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': '3b81ac9686aee539d447bb2085b2cdfc131c7c91', 'k2-git-date': 'Thu Jan 26 20:40:25 2023', 'lhotse-version': '1.9.0.dev+git.97bf4b0.dirty', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'surt', 'icefall-git-sha1': 'f8acb25-dirty', 'icefall-git-date': 'Thu Feb 9 12:58:59 2023', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r7n03', 'IP address': '10.1.7.3'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/v1'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,2,2,2,2', 'feedforward_dims': '768,768,768,768,768', 'nhead': '8,8,8,8,8', 'encoder_dims': '256,256,256,256,256', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '192,192,192,192,192', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'full_libri': True, 'manifest_dir': PosixPath('data/manifests'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/v1/greedy_search'), 'suffix': 'epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2023-02-12 09:04:44,046 INFO [decode.py:673] About to create model
2023-02-12 09:04:44,322 INFO [zipformer.py:402] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-02-12 09:04:44,332 INFO [decode.py:744] Calculating the averaged model over epoch range from 21 (excluded) to 30
2023-02-12 09:04:49,669 INFO [decode.py:778] Number of model parameters: 20697573
2023-02-12 09:04:49,670 INFO [asr_datamodule.py:444] About to get test-clean cuts
2023-02-12 09:04:49,844 INFO [asr_datamodule.py:451] About to get test-other cuts
2023-02-12 09:04:53,359 INFO [decode.py:560] batch 0/?, cuts processed until now is 36
2023-02-12 09:06:00,901 INFO [decode.py:560] batch 50/?, cuts processed until now is 2609
2023-02-12 09:06:01,345 INFO [decode.py:576] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/v1/greedy_search/recogs-test-clean-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-02-12 09:06:01,409 INFO [utils.py:538] [test-clean-greedy_search] %WER 3.94% [2072 / 52576, 243 ins, 178 del, 1651 sub ]
2023-02-12 09:06:01,657 INFO [decode.py:589] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/v1/greedy_search/errs-test-clean-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-02-12 09:06:01,658 INFO [decode.py:605]
For test-clean, WER of different settings are:
greedy_search 3.94 best for test-clean

2023-02-12 09:06:04,334 INFO [decode.py:560] batch 0/?, cuts processed until now is 43
2023-02-12 09:07:03,165 INFO [decode.py:560] batch 50/?, cuts processed until now is 2939
2023-02-12 09:07:03,305 INFO [decode.py:576] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/v1/greedy_search/recogs-test-other-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-02-12 09:07:03,376 INFO [utils.py:538] [test-other-greedy_search] %WER 9.79% [5125 / 52343, 496 ins, 537 del, 4092 sub ]
2023-02-12 09:07:03,533 INFO [decode.py:589] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/v1/greedy_search/errs-test-other-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
2023-02-12 09:07:03,535 INFO [decode.py:605]
For test-other, WER of different settings are:
greedy_search 9.79 best for test-other

2023-02-12 09:07:03,535 INFO [decode.py:809] Done!
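The %WER figures reported in this log are the standard ratio of (insertions + deletions + substitutions) to reference words. A quick arithmetic check of the test-clean greedy_search line above:

```python
# Worked check of "[test-clean-greedy_search] %WER 3.94% [2072 / 52576, 243 ins, 178 del, 1651 sub ]"
errors = 243 + 178 + 1651      # = 2072 total errors
wer = 100.0 * errors / 52576   # 52576 reference words in test-clean
print(f"{wer:.2f}%")           # -> 3.94%
```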
log/greedy_search/recogs-test-clean-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED (diff too large to render)

log/greedy_search/recogs-test-other-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED (diff too large to render)
log/greedy_search/wer-summary-test-clean-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
settings WER
greedy_search 3.94

log/greedy_search/wer-summary-test-other-greedy_search-epoch-30-avg-9-streaming-chunk-size-32-context-2-max-sym-per-frame-1-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
settings WER
greedy_search 9.79
log/log-train-2023-02-05-17-58-35-0
ADDED (diff too large to render)

log/log-train-2023-02-05-17-58-35-1
ADDED (diff too large to render)

log/log-train-2023-02-05-17-58-35-2
ADDED (diff too large to render)

log/log-train-2023-02-05-17-58-35-3
ADDED (diff too large to render)

log/log-train-2023-02-08-23-42-53-0
ADDED (diff too large to render)

log/log-train-2023-02-08-23-42-53-1
ADDED (diff too large to render)

log/log-train-2023-02-08-23-42-53-2
ADDED (diff too large to render)

log/log-train-2023-02-08-23-42-53-3
ADDED (diff too large to render)
log/modified_beam_search/errs-test-clean-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED (diff too large to render)

log/modified_beam_search/errs-test-other-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED (diff too large to render)
log/modified_beam_search/log-decode-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model-2023-02-12-09-10-17
ADDED
@@ -0,0 +1,45 @@
2023-02-12 09:10:17,598 INFO [decode.py:655] Decoding started
2023-02-12 09:10:17,598 INFO [decode.py:661] Device: cuda:0
2023-02-12 09:10:17,601 INFO [decode.py:671] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.3', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': '3b81ac9686aee539d447bb2085b2cdfc131c7c91', 'k2-git-date': 'Thu Jan 26 20:40:25 2023', 'lhotse-version': '1.9.0.dev+git.97bf4b0.dirty', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'surt', 'icefall-git-sha1': 'f8acb25-dirty', 'icefall-git-date': 'Thu Feb 9 12:58:59 2023', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r7n03', 'IP address': '10.1.7.3'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/v1'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'modified_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,2,2,2,2', 'feedforward_dims': '768,768,768,768,768', 'nhead': '8,8,8,8,8', 'encoder_dims': '256,256,256,256,256', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '192,192,192,192,192', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'full_libri': True, 'manifest_dir': PosixPath('data/manifests'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7_streaming/exp/v1/modified_beam_search'), 'suffix': 'epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2023-02-12 09:10:17,601 INFO [decode.py:673] About to create model
2023-02-12 09:10:17,862 INFO [zipformer.py:402] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-02-12 09:10:17,870 INFO [decode.py:744] Calculating the averaged model over epoch range from 21 (excluded) to 30
2023-02-12 09:10:23,393 INFO [decode.py:778] Number of model parameters: 20697573
2023-02-12 09:10:23,394 INFO [asr_datamodule.py:444] About to get test-clean cuts
2023-02-12 09:10:23,532 INFO [asr_datamodule.py:451] About to get test-other cuts
2023-02-12 09:10:30,409 INFO [decode.py:560] batch 0/?, cuts processed until now is 36
2023-02-12 09:12:23,706 INFO [decode.py:560] batch 20/?, cuts processed until now is 1038
2023-02-12 09:13:00,677 INFO [zipformer.py:2431] attn_weights_entropy = tensor([3.7696, 3.6940, 3.4648, 2.1037, 3.2757, 3.5310, 3.4728, 3.3882],
device='cuda:0'), covar=tensor([0.0950, 0.0609, 0.0885, 0.5375, 0.1094, 0.0782, 0.1186, 0.0733],
device='cuda:0'), in_proj_covar=tensor([0.0529, 0.0443, 0.0434, 0.0544, 0.0429, 0.0451, 0.0427, 0.0392],
device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2023-02-12 09:14:04,911 INFO [decode.py:560] batch 40/?, cuts processed until now is 2296
2023-02-12 09:14:38,473 INFO [decode.py:576] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/v1/modified_beam_search/recogs-test-clean-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-02-12 09:14:38,536 INFO [utils.py:538] [test-clean-beam_size_4] %WER 3.88% [2038 / 52576, 255 ins, 160 del, 1623 sub ]
2023-02-12 09:14:38,761 INFO [decode.py:589] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/v1/modified_beam_search/errs-test-clean-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-02-12 09:14:38,762 INFO [decode.py:605]
For test-clean, WER of different settings are:
beam_size_4 3.88 best for test-clean

2023-02-12 09:14:44,597 INFO [decode.py:560] batch 0/?, cuts processed until now is 43
2023-02-12 09:15:16,588 INFO [zipformer.py:2431] attn_weights_entropy = tensor([1.5588, 1.8418, 2.6260, 1.4591, 1.9960, 1.9089, 1.6740, 2.0659],
device='cuda:0'), covar=tensor([0.1906, 0.2949, 0.0991, 0.5130, 0.2067, 0.3422, 0.2614, 0.2238],
device='cuda:0'), in_proj_covar=tensor([0.0529, 0.0621, 0.0552, 0.0656, 0.0651, 0.0599, 0.0549, 0.0635],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2023-02-12 09:15:27,052 INFO [zipformer.py:2431] attn_weights_entropy = tensor([1.3968, 1.5670, 1.5791, 1.1139, 1.6266, 1.3491, 0.3394, 1.5611],
device='cuda:0'), covar=tensor([0.0519, 0.0407, 0.0368, 0.0564, 0.0482, 0.1088, 0.1001, 0.0316],
device='cuda:0'), in_proj_covar=tensor([0.0459, 0.0397, 0.0350, 0.0449, 0.0382, 0.0540, 0.0392, 0.0425],
device='cuda:0'), out_proj_covar=tensor([1.2185e-04, 1.0291e-04, 9.1160e-05, 1.1731e-04, 9.9927e-05, 1.5083e-04,
1.0519e-04, 1.1140e-04], device='cuda:0')
2023-02-12 09:16:30,704 INFO [decode.py:560] batch 20/?, cuts processed until now is 1198
2023-02-12 09:18:11,673 INFO [decode.py:560] batch 40/?, cuts processed until now is 2642
2023-02-12 09:18:40,706 INFO [decode.py:576] The transcripts are stored in pruned_transducer_stateless7_streaming/exp/v1/modified_beam_search/recogs-test-other-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-02-12 09:18:40,785 INFO [utils.py:538] [test-other-beam_size_4] %WER 9.53% [4988 / 52343, 533 ins, 455 del, 4000 sub ]
2023-02-12 09:18:40,939 INFO [decode.py:589] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp/v1/modified_beam_search/errs-test-other-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
2023-02-12 09:18:40,940 INFO [decode.py:605]
For test-other, WER of different settings are:
beam_size_4 9.53 best for test-other

2023-02-12 09:18:40,940 INFO [decode.py:809] Done!
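The modified_beam_search results above use a beam size of 4 (see the parameters in the decoding log). Roughly speaking, the method keeps at most `beam` hypotheses per utterance after each frame, expanding each by at most one symbol per frame; the sketch below illustrates only that per-frame pruning idea and is not the icefall implementation (the types and names are hypothetical):

```python
# Illustrative sketch of per-frame hypothesis pruning with beam = 4
# (hypothetical types; not icefall's modified_beam_search code).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Hyp:
    tokens: Tuple[int, ...]   # decoded token ids so far
    log_prob: float           # total log-probability of this hypothesis

def prune(hyps: List[Hyp], beam: int = 4) -> List[Hyp]:
    """Keep the `beam` most probable hypotheses."""
    return sorted(hyps, key=lambda h: h.log_prob, reverse=True)[:beam]

# Five candidates collapse to the four most probable ones.
cands = [Hyp((1,), -0.1), Hyp((2,), -0.5), Hyp((3,), -1.2),
         Hyp((4,), -2.0), Hyp((5,), -3.3)]
print([h.tokens for h in prune(cands)])  # the four best hypotheses remain
```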
log/modified_beam_search/recogs-test-clean-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED (diff too large to render)

log/modified_beam_search/recogs-test-other-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED (diff too large to render)
log/modified_beam_search/wer-summary-test-clean-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
settings WER
beam_size_4 3.88

log/modified_beam_search/wer-summary-test-other-beam_size_4-epoch-30-avg-9-streaming-chunk-size-32-modified_beam_search-beam-size-4-use-averaged-model.txt
ADDED
@@ -0,0 +1,2 @@
settings WER
beam_size_4 9.53