Model Summary

I made a KenLM language model that works with Meta's Massively Multilingual Speech (MMS) model to improve ASR transcriptions in Amharic. The model was trained on the Amharic Common Crawl corpus covering Jan-Dec 2018 (link). It seems to improve my ASR transcriptions considerably, but I don't expect this LM to bring the WER for Amharic transcriptions down to the level reported in the MMS paper (around 32%). That would require a larger Amharic corpus, and I have no idea how to compile one myself. For reference, the corpus I used is only 837 MB; the MMS paper suggests using a corpus of > 5 GB.
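For anyone curious how a KenLM file like this gets attached to MMS, here is a rough sketch of the usual pyctcdecode + Transformers recipe. The "5gram.arpa" path is a placeholder for a KenLM model you would have built yourself (e.g. with KenLM's lmplz tool), and the exact settings behind this repo may differ:

from pyctcdecode import build_ctcdecoder
from transformers import AutoProcessor, Wav2Vec2ProcessorWithLM

# start from the plain MMS processor and switch its tokenizer to Amharic
processor = AutoProcessor.from_pretrained("facebook/mms-1b-all")
processor.tokenizer.set_target_lang("amh")

# pyctcdecode expects the vocabulary labels ordered by token id
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda item: item[1])]

# wrap the KenLM file (placeholder path) in a beam-search decoder
decoder = build_ctcdecoder(labels, kenlm_model_path="5gram.arpa")

# bundle everything into an LM-boosted processor and save it
processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
processor_with_lm.save_pretrained("mms-1b-all-AmhLM")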

Getting Started

To use this LM-boosted processor via the Transformers library, load the "Wav2Vec2ProcessorWithLM" class instead of "Wav2Vec2Processor". Here's a quick example of how I use it:

import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

# load the pretrained MMS model and the LM-boosted processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/mms-1b-all")
processor = Wav2Vec2ProcessorWithLM.from_pretrained("jlonsako/mms-1b-all-AmhLM")

# load the Amharic adapter weights
model.load_adapter("amh")

# load the audio (here with torchaudio) and resample to the 16 kHz rate MMS expects
waveform, sample_rate = torchaudio.load("insert audio file path")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

input_values = processor(waveform[0].numpy(), sampling_rate=16_000, return_tensors="pt").input_values

with torch.no_grad():
    logits = model(input_values).logits

# decode with the KenLM-boosted beam search
transcription = processor.batch_decode(logits.numpy()).text
print(transcription[0])

Limitations

I would love to post stats like WER and BLEU scores on out-of-domain datasets, but since I'm a C# web developer by trade, I don't know how to run those evaluations myself. I just want to make this available for anyone who wants to test MMS performance with an Amharic LM, and also for my own use. Happy testing!
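That said, if anyone wants to measure WER themselves, here is a minimal sketch using the jiwer library; the reference and hypothesis lists are placeholders you would fill with your own transcripts:

import jiwer

# placeholder lists: ground-truth transcripts and the model's outputs for the same clips
references = ["reference transcript one", "reference transcript two"]
hypotheses = ["model output one", "model output two"]

# word error rate across the whole set (lower is better)
error_rate = jiwer.wer(references, hypotheses)
print(f"WER: {error_rate:.2%}")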

Final Notes

I hope it goes without saying that this repository inherits its license from Meta MMS, meaning this repo can only be used for research purposes. Personally, I am using this to build a simple video transcription app for Amharic videos produced by my church. If you want to know more or would like to help make this LM better (seeing as I am clueless), feel free to reach out to me by email: jtlonsako.work@gmail.com.
-Joshua Lonsako
