---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---

# Distil-wav2vec2 
This model is a distilled version of [wav2vec2](https://arxiv.org/pdf/2006.11477.pdf). It is 4 times smaller and 3 times faster than the original wav2vec2 large model.

# Evaluation results
When used with a light tri-gram language model head, this model achieves the following results:

| Dataset           | WER   |
| ----------------- | ----- |
| LibriSpeech-clean | 0.127 |

# Usage
A demonstration notebook (Google Colab) is available at https://github.com/OthmaneJ/distil-wav2vec2.
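
As a quick-start alternative to the notebook, the model can be loaded with the standard `transformers` wav2vec2 classes. This is a minimal sketch: the model id `OthmaneJ/distil-wav2vec2` is assumed from the linked GitHub repository, and one second of silence stands in for real 16 kHz speech input.

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Model id assumed from the linked GitHub repository.
model_id = "OthmaneJ/distil-wav2vec2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# One second of silence at 16 kHz as a stand-in for real speech;
# replace with audio loaded e.g. via librosa or torchaudio.
speech = torch.zeros(16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy (argmax) CTC decoding, without the tri-gram language model
# used for the reported WER.
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)[0]
print(transcription)
```

Note that the WER in the table above was obtained with a tri-gram language model head; plain greedy decoding as sketched here will generally score somewhat worse.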