Commit 4a6a58c by luomingshuang (parent: e7902ef): Create README.md
Note: This recipe was trained with the code from this PR: https://github.com/k2-fsa/icefall/pull/233, and the SpecAugment code from this PR: https://github.com/lhotse-speech/lhotse/pull/604.

# Pre-trained Transducer-Stateless models for the TEDLium3 dataset with icefall

The model was trained on the full [TEDLium3](https://www.openslr.org/51) dataset with the scripts in [icefall](https://github.com/k2-fsa/icefall).
## Training procedure

The main repositories are listed below; the training and decoding scripts will be updated as these projects evolve.

k2: https://github.com/k2-fsa/k2

icefall: https://github.com/k2-fsa/icefall

lhotse: https://github.com/lhotse-speech/lhotse
* Install k2 and lhotse. For k2, follow the installation guide at https://k2.readthedocs.io/en/latest/installation/index.html; for lhotse, see https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should work. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit shown above.

```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Prepare the data.

```
cd egs/tedlium3/ASR
bash ./prepare.sh
```
* Training

```
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./transducer_stateless/train.py \
  --world-size 4 \
  --num-epochs 30 \
  --start-epoch 0 \
  --exp-dir transducer_stateless/exp \
  --max-duration 200
```
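After training, each row of the evaluation table below corresponds to one decoding run. The sketch here follows the flag conventions of icefall's transducer_stateless recipes (`decode.py` with a `--decoding-method` option); the exact script path and flag names are assumptions based on those conventions, so verify them against the checked-out commit. The commands are printed rather than executed, so the flags can be inspected first:

```shell
# Hypothetical decoding sweep over the three search methods reported below.
# The decode.py path and --decoding-method flag follow icefall conventions
# and are assumptions, not verified against this exact commit.
methods="greedy_search beam_search modified_beam_search"
for method in $methods; do
  cmd="./transducer_stateless/decode.py --epoch 29 --avg 16 --exp-dir transducer_stateless/exp --max-duration 100 --decoding-method $method"
  echo "$cmd"
done
```

Run each printed command from `egs/tedlium3/ASR` (after training finishes) to reproduce the corresponding row of the results table.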
## Evaluation results

The decoding results (WER%) on TEDLium3 (dev and test) are listed below. We obtained these results by averaging the models from epoch 14 to epoch 29.

The WERs are:
|                                    | dev  | test | comment                                  |
|------------------------------------|------|------|------------------------------------------|
| greedy search                      | 7.19 | 6.57 | --epoch 29, --avg 16, --max-duration 100 |
| beam search (beam size 4)          | 7.12 | 6.37 | --epoch 29, --avg 16, --max-duration 100 |
| modified beam search (beam size 4) | 7.00 | 6.19 | --epoch 29, --avg 16, --max-duration 100 |
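"Averaging the models from epoch 14 to 29" is what `--epoch 29 --avg 16` expresses: the last 16 epoch checkpoints, i.e. epochs 29 − 16 + 1 = 14 through 29. A minimal sketch of that parameter averaging over plain Python dicts — icefall's actual implementation averages PyTorch state dicts, so the names and data here are illustrative only:

```python
def average_checkpoints(checkpoints):
    # Element-wise mean of parameter dicts that share the same keys.
    n = len(checkpoints)
    return {k: sum(c[k] for c in checkpoints) / n for k in checkpoints[0]}

# --epoch 29 with --avg 16 averages the checkpoints from epoch 14 to 29.
epochs = range(29 - 16 + 1, 30)            # 14, 15, ..., 29
ckpts = [{"w": float(e)} for e in epochs]  # stand-ins for loaded state dicts
averaged = average_checkpoints(ckpts)
print(averaged["w"])                       # mean of 14..29 -> 21.5
```

The averaged parameters are then used as a single model for decoding, which is why every table row shares the same `--epoch 29, --avg 16` comment.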