OthmaneJ committed
Commit dda7488
1 Parent(s): 165d08f

update readme

Files changed (2)
  1. README +0 -22
  2. README.md +22 -0
README DELETED
@@ -1,22 +0,0 @@
- ---
- language: en
- datasets:
- - librispeech_asr
- tags:
- - speech
- - audio
- - automatic-speech-recognition
- license: apache-2.0
- ---
-
- # Distil-wav2vec2
- This model is a distilled version of the wav2vec2 model (https://arxiv.org/pdf/2006.11477.pdf). It is 4 times smaller and 3 times faster than the original wav2vec2 large model.
-
- # Evaluation results
- When used with a light tri-gram language model head, this model achieves the following results:
- | Dataset | WER |
- | ----------------- |:-----:|
- | Librispeech-clean | 12.7% |
-
- # Usage
- A demo notebook (Google Colab) is available at https://github.com/OthmaneJ/distil-wav2vec2.
README.md CHANGED
@@ -0,0 +1,22 @@
+ ---
+ language: en
+ datasets:
+ - librispeech_asr
+ tags:
+ - speech
+ - audio
+ - automatic-speech-recognition
+ license: apache-2.0
+ ---
+
+ # Distil-wav2vec2
+ This model is a distilled version of the wav2vec2 model (https://arxiv.org/pdf/2006.11477.pdf). It is 4 times smaller and 3 times faster than the original wav2vec2 large model.
+
+ # Evaluation results
+ When used with a light tri-gram language model head, this model achieves the following results:
+ | Dataset | WER |
+ | ----------------- |:-----:|
+ | Librispeech-clean | 12.7% |
+
+ # Usage
+ A demo notebook (Google Colab) is available at https://github.com/OthmaneJ/distil-wav2vec2.
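
For reference, transcription with a checkpoint like this follows the standard wav2vec2 CTC flow in `transformers`. Below is a minimal sketch, assuming the model is published on the Hub as `OthmaneJ/distil-wav2vec2` (the model ID is inferred from the author's username, not stated in this diff) and that it keeps the usual `Wav2Vec2ForCTC` interface:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Assumed Hub ID; adjust if the checkpoint lives elsewhere.
model_id = "OthmaneJ/distil-wav2vec2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load an audio file and resample to the 16 kHz rate wav2vec2 expects.
waveform, sample_rate = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

# Greedy CTC decoding (no language model, unlike the evaluation above).
inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```

Plain greedy decoding like this will typically score somewhat worse than the 12.7% WER reported in the table, which relies on the tri-gram language model head.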
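
The WER column above is the standard word error rate: substitutions, deletions, and insertions divided by the number of reference words. A minimal sketch of computing it with the `jiwer` package (the package choice and the example strings are illustrative assumptions, not the actual Librispeech evaluation):

```python
import jiwer

# Placeholder reference/hypothesis pairs; a real evaluation would use
# Librispeech-clean transcripts and the model's decoded outputs.
references = ["the quick brown fox jumps over the lazy dog"]
hypotheses = ["the quick brown fox jump over the lazy dog"]

wer = jiwer.wer(references, hypotheses)
print(f"WER: {wer:.1%}")  # one substitution over nine reference words, about 11.1%
```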