Note: This recipe was trained with the code from this PR: https://github.com/k2-fsa/icefall/pull/355, and the SpecAugment code from this PR: https://github.com/lhotse-speech/lhotse/pull/604.

# Pre-trained Transducer-Stateless2 models for the Aidatatang_200zh dataset with icefall

The model was trained on the full [Aidatatang_200zh](https://www.openslr.org/62) dataset with the scripts in [icefall](https://github.com/k2-fsa/icefall), based on the latest version of k2.

## Training procedure

The main repositories are listed below; the training and decoding scripts will be updated as these projects evolve.

* k2: https://github.com/k2-fsa/k2
* icefall: https://github.com/k2-fsa/icefall
* lhotse: https://github.com/lhotse-speech/lhotse

* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html and the lhotse installation guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should work. Please also install the requirements listed in icefall, for example as in the sketch below.

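A minimal setup might look like the following sketch. It assumes lhotse can be installed from PyPI and that icefall's Python requirements are installed from its `requirements.txt` after the repository is cloned in the next step; the exact k2 install command depends on your PyTorch and CUDA versions, so it is not shown.
```
# lhotse is usually installable from PyPI; see its guide for alternatives
pip install lhotse

# k2 must match your PyTorch/CUDA versions; follow
# https://k2.readthedocs.io/en/latest/installation/index.html

# after cloning icefall (next step), install its Python requirements
pip install -r requirements.txt
```
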
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit mentioned above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Prepare the data.
```
cd egs/aidatatang_200zh/ASR
bash ./prepare.sh
```
* Training
```
# Train on 2 GPUs; --max-duration is the maximum total utterance duration (in seconds) per batch.
export CUDA_VISIBLE_DEVICES="0,1"
./pruned_transducer_stateless2/train.py \
  --world-size 2 \
  --num-epochs 30 \
  --start-epoch 0 \
  --exp-dir pruned_transducer_stateless2/exp \
  --lang-dir data/lang_char \
  --max-duration 250
```
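
Training progress can be monitored with TensorBoard. The sketch below assumes the logs are written to `<exp-dir>/tensorboard`, the default location used by the icefall training scripts; adjust the path if your setup differs.
```
tensorboard --logdir pruned_transducer_stateless2/exp/tensorboard
```
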
## Evaluation results

The decoding results (WER%) on Aidatatang_200zh (dev and test) are listed below. These results were obtained by averaging the models from epoch 11 to 29.

The WERs are:

|                                     | dev  | test | comment                                    |
|-------------------------------------|------|------|--------------------------------------------|
| greedy search                       | 5.53 | 6.59 | --epoch 29, --avg 19, --max-duration 100   |
| modified beam search (beam size 4)  | 5.28 | 6.32 | --epoch 29, --avg 19, --max-duration 100   |
| fast beam search (set as default)   | 5.29 | 6.33 | --epoch 29, --avg 19, --max-duration 1500  |
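
The sketch below shows how such results are typically produced with the recipe's decoding script. The flag names are assumptions based on the standard icefall `pruned_transducer_stateless2/decode.py` interface, so please check `./pruned_transducer_stateless2/decode.py --help` before running.
```
# Greedy / modified beam search results in the table used --max-duration 100
./pruned_transducer_stateless2/decode.py \
  --epoch 29 \
  --avg 19 \
  --exp-dir pruned_transducer_stateless2/exp \
  --lang-dir data/lang_char \
  --max-duration 100 \
  --decoding-method modified_beam_search \
  --beam-size 4

# The fast beam search result in the table used --max-duration 1500
./pruned_transducer_stateless2/decode.py \
  --epoch 29 \
  --avg 19 \
  --exp-dir pruned_transducer_stateless2/exp \
  --lang-dir data/lang_char \
  --max-duration 1500 \
  --decoding-method fast_beam_search
```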