Malwhisper-v1-medium
This model is a fine-tuned version of openai/whisper-medium, trained on the IMaSC dataset.
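A minimal sketch of loading the checkpoint for inference with the Hugging Face `transformers` ASR pipeline. The audio path is a placeholder, and the `generate_kwargs` shown assume a recent `transformers` release that accepts `language`/`task` for Whisper models:

```python
# Repo id taken from this model card.
MODEL_ID = "smcproject/Malwhisper-v1-medium"


def transcribe(path: str) -> str:
    """Transcribe one Malayalam audio file with the fine-tuned checkpoint.

    `path` should point to an audio file (e.g. a 16 kHz WAV clip).
    """
    from transformers import pipeline

    # Build an automatic-speech-recognition pipeline around the checkpoint.
    asr = pipeline("automatic-speech-recognition", model=MODEL_ID)

    # Force Malayalam transcription rather than letting Whisper
    # auto-detect the language or translate to English.
    result = asr(path, generate_kwargs={"language": "malayalam", "task": "transcribe"})
    return result["text"]
```

Calling `transcribe("clip.wav")` (with a real audio file) downloads the model on first use and returns the Malayalam transcript as a string.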
About Dataset
IMaSC is a Malayalam text and speech corpus made available by ICFOSS for the purpose of developing speech technology for Malayalam, particularly text-to-speech. The corpus contains 34,473 text-audio pairs of Malayalam sentences spoken by 8 speakers, totalling approximately 50 hours of audio.
Training
Experiment Tracking with Weights and Biases
GPU used: A100 (80 GB)
Training Time: 16 hours
This project was built with an A100 80 GB GPU provided by E2E Cloud during their open hack day.
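The card does not list the training hyperparameters, so the sketch below uses illustrative placeholder values only; it shows how a Whisper fine-tuning run with Weights and Biases tracking is typically configured through `Seq2SeqTrainingArguments`:

```python
# None of these values come from the model card -- they are placeholders
# typical of Whisper fine-tuning, shown only to illustrate the setup.
ILLUSTRATIVE_ARGS = dict(
    output_dir="./malwhisper-v1-medium",
    per_device_train_batch_size=16,   # assumption, not from the card
    learning_rate=1e-5,               # assumption, not from the card
    fp16=True,
    predict_with_generate=True,
    report_to=["wandb"],              # experiment tracking with Weights and Biases
)


def build_training_args():
    """Construct Seq2SeqTrainingArguments from the illustrative values."""
    from transformers import Seq2SeqTrainingArguments

    return Seq2SeqTrainingArguments(**ILLUSTRATIVE_ARGS)
```

With `report_to=["wandb"]`, the `Seq2SeqTrainer` logs metrics to a Weights and Biases project automatically, matching the experiment tracking mentioned above.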
Evaluation
The fine-tuned model was evaluated on the following datasets:
On the Mozilla Common Voice 11.0 dataset (Malayalam subset):
WER - 61.84
CER - 15.41
On the SMC Malayalam Speech Corpus dataset:
WER - 70.49
CER - 17.0
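WER and CER are both edit-distance rates, computed over words and characters respectively (the scores above are percentages). A minimal, self-contained sketch of how they are computed:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (strings or word lists)."""
    m, n = len(ref), len(hyp)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            # Minimum of deletion, insertion, and substitution/match.
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[n]


def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edits / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)


def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edits / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

For example, `wer("a b c d", "a b x d")` is 0.25 (one substitution over four reference words). In practice, libraries such as `jiwer` or Hugging Face `evaluate` are commonly used for these metrics.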
Dataset used to train smcproject/Malwhisper-v1-medium: IMaSC
Evaluation results
- WER on the Common Voice 11.0 test set (self-reported): 61.840
- CER on the Common Voice 11.0 test set (self-reported): 15.410