
Arabic Hubert-Large - with CTC fine-tuned on Common Voice 8.0 (No LM)

This model is a fine-tuned version of Arabic Hubert-Large. We fine-tuned it on the Arabic Common Voice dataset, achieving state-of-the-art results on the Common Voice Arabic test set: 17.68% WER and 5.49% CER.

The original model was pre-trained on 2,000 hours of 16 kHz sampled Arabic speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz; see the original paper for more details on the model.
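Since the model expects 16 kHz input, it can help to verify a WAV file's sample rate before transcribing. Below is a minimal standard-library sketch (the file name and test tone are illustrative, not part of this model card; actual resampling would require a library such as torchaudio or librosa):

```python
import math
import struct
import wave

TARGET_RATE = 16000  # Hz; the model expects 16 kHz input

def sample_rate(path):
    """Read the sample rate of a WAV file using only the stdlib."""
    with wave.open(path, "rb") as wf:
        return wf.getframerate()

# Demo: write a one-second 440 Hz sine tone at 16 kHz, then check it.
with wave.open("tone_16k.wav", "wb") as wf:
    wf.setnchannels(1)           # mono
    wf.setsampwidth(2)           # 16-bit PCM
    wf.setframerate(TARGET_RATE)
    frames = b"".join(
        struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * 440 * t / TARGET_RATE)))
        for t in range(TARGET_RATE)
    )
    wf.writeframes(frames)

assert sample_rate("tone_16k.wav") == TARGET_RATE
```

If the rate differs from 16 kHz, resample the audio before passing it to the model.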

The performance of the model on CommonVoice Arabic 8.0 is the following:

| Valid WER | Valid CER | Test WER | Test CER |
|-----------|-----------|----------|----------|
| 10.93     | 3.13      | 17.68    | 5.49     |
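WER and CER are standard edit-distance metrics: the Levenshtein distance between reference and hypothesis, over words or characters respectively, divided by the reference length. As a self-contained illustration (this is not the SpeechBrain evaluation code), they can be computed like this:

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over token sequences.
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def wer(reference, hypothesis):
    # Word error rate: edit distance over whitespace-split words.
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    # Character error rate: same distance over characters (spaces removed).
    ref_chars = list(reference.replace(" ", ""))
    return edit_distance(ref_chars, list(hypothesis.replace(" ", ""))) / len(ref_chars)
```

For example, one substituted word in a four-word reference gives a WER of 0.25.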

This model is trained using SpeechBrain.


You can try the model using SpeechBrain as follows:

Install SpeechBrain and Transformers:

```bash
pip install speechbrain transformers
```

Then run the following code:

```python
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(
    source="asafaya/hubert-large-arabic-ft",
    savedir="pretrained_models/asafaya/hubert-large-arabic-ft",
)

# Transcribe a 16 kHz audio file (the path is a placeholder for your own recording)
print(asr_model.transcribe_file("audio.wav"))
```

Example output:

> وصلوا واحدا خلف الآخر ("they arrived one after the other")

More about SpeechBrain.


This work is licensed under CC BY-NC 4.0.



Model fine-tuning and data processing for this work were performed at the KUACC Cluster.
