---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---

# Distil-wav2vec2

This model is a distilled version of the wav2vec2 model (https://arxiv.org/pdf/2006.11477.pdf). It is 45% smaller and 3 times faster than the original wav2vec2 base model.

# Evaluation results

This model achieves the following results:

|Model|Size|WER Librispeech-test-clean|WER Librispeech-test-other|
|----------|-------------|-------------|-----------|
|Distil-wav2vec2|197.9 MB|0.0983|0.2266|
|wav2vec2-base|360 MB|0.0389|0.1047|

# Usage

A demo notebook (Google Colab) is available at https://github.com/OthmaneJ/distil-wav2vec2
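For quick inference without the notebook, the model can be loaded through the standard `transformers` CTC pipeline. This is a minimal sketch: the model id `OthmaneJ/distil-wav2vec2` is assumed from the author's GitHub handle and may need adjusting, and a silent dummy waveform stands in for real 16 kHz speech.

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Assumed Hub id, inferred from the GitHub repo name; verify before use.
model_id = "OthmaneJ/distil-wav2vec2"

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# One second of silence at 16 kHz stands in for a real waveform;
# replace with audio loaded via e.g. torchaudio or librosa.
speech = torch.zeros(16000)

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```

For real audio, make sure the waveform is resampled to 16 kHz before passing it to the processor, since wav2vec2 checkpoints are trained at that sampling rate.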