---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---

# Distil-wav2vec2

This model is a distilled version of the wav2vec2 model (https://arxiv.org/pdf/2006.11477.pdf). It is 4 times smaller and 3 times faster than the original wav2vec2 large model.
# Evaluation results

When used with a light tri-gram language model head, this model achieves the following results (a decoding sketch follows the table):

| Dataset           | WER   |
| ----------------- |:-----:|
| LibriSpeech-clean | 12.7% |
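
This card does not publish the exact language model, but tri-gram fusion over the CTC logits can be sketched with `pyctcdecode` and a KenLM file. In the sketch below, the Hub id `OthmaneJ/distil-wav2vec2`, the KenLM file `3gram.arpa`, and the audio file `sample.flac` are assumptions for illustration, not artifacts shipped with this card:

```python
# Sketch: CTC decoding fused with a KenLM tri-gram via pyctcdecode.
# Assumptions (not shipped with this card): the Hub id "OthmaneJ/distil-wav2vec2",
# a KenLM tri-gram file "3gram.arpa", and a 16 kHz mono recording "sample.flac".
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from pyctcdecode import build_ctcdecoder

processor = Wav2Vec2Processor.from_pretrained("OthmaneJ/distil-wav2vec2")
model = Wav2Vec2ForCTC.from_pretrained("OthmaneJ/distil-wav2vec2")

# pyctcdecode expects the vocabulary as a list of tokens ordered by id.
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(labels, kenlm_model_path="3gram.arpa")

speech, sampling_rate = sf.read("sample.flac")
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits[0].numpy()  # shape (time, vocab)

print(decoder.decode(logits))
```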

# Usage

A demo notebook (Google Colab) is available at https://github.com/OthmaneJ/distil-wav2vec2.
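
Since the model follows the standard wav2vec2 architecture, plain greedy decoding (no language model) should work with the usual `transformers` classes. This is a minimal sketch, again assuming the Hub id `OthmaneJ/distil-wav2vec2` and a local file `sample.flac`:

```python
# Sketch: greedy (argmax) CTC decoding with the standard transformers classes.
# Assumptions: Hub id "OthmaneJ/distil-wav2vec2", 16 kHz mono file "sample.flac".
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("OthmaneJ/distil-wav2vec2")
model = Wav2Vec2ForCTC.from_pretrained("OthmaneJ/distil-wav2vec2")

speech, sampling_rate = sf.read("sample.flac")
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Take the most likely token per frame; batch_decode collapses repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```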