Whisper large for Ugandan languages
Available languages:

| ISO 639-3 | Language |
|---|---|
| eng | English (Ugandan accent) |
| lug | Luganda |
| ach | Acholi |
| lgg | Lugbara |
| teo | Ateso |
| nyn | Runyankole |
| xog | Lusoga |
| myx | Lumasaba |
| swa | Swahili |
| kin | Kinyarwanda |
The model is used in much the same way as the base Whisper model: by default it attempts to auto-detect the language and produce a transcription. Language detection is not always accurate, however, and results are usually better when the language is specified explicitly. Since the base Whisper model does not support these languages, the language is specified in a slightly different format:
```python
# In a notebook, clone the SALT repo to get the language token constants.
!git clone https://github.com/jqug/salt.git

import salt.constants
import transformers
import datasets
import torch

processor = transformers.WhisperProcessor.from_pretrained(
    "jq/whisper-large-v2-salt-plus-xog-myx-kin-swa")
model = transformers.WhisperForConditionalGeneration.from_pretrained(
    "jq/whisper-large-v2-salt-plus-xog-myx-kin-swa")

# Get some test audio from the SALT dataset.
ds = datasets.load_dataset('Sunbird/salt', 'multispeaker-lug', split='test')
audio = ds[0]['audio']
sample_rate = ds[0]['sample_rate']

# Specify a language from: eng, lug, ach, teo, lgg, nyn, myx, xog, swa, kin.
lang = 'lug'

# Apply the model.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
input_features = processor(
    audio, sampling_rate=sample_rate, return_tensors="pt").input_features
input_features = input_features.to(device)
predicted_ids = model.to(device).generate(
    input_features,
    # Optionally set language=None here instead to auto-detect.
    language=processor.tokenizer.decode(
        salt.constants.SALT_LANGUAGE_TOKENS_WHISPER[lang]),
    forced_decoder_ids=None)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription)
# Ekikoola kya kasooli kya kyenvu wabula langi yaakyo etera okuba eya kitaka wansi.
```
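To transcribe your own recording instead of the SALT test clip, the audio just needs to be a mono waveform at 16 kHz, the sampling rate Whisper models expect. Below is a minimal sketch, assuming `librosa` is available and reusing the `processor`, `model`, `device`, and SALT constants loaded above; `my_recording.wav` is a hypothetical local file:

```python
import librosa

# Hypothetical local file; librosa resamples it to the 16 kHz mono
# waveform that Whisper feature extraction expects.
audio, sample_rate = librosa.load("my_recording.wav", sr=16000, mono=True)

input_features = processor(
    audio, sampling_rate=sample_rate, return_tensors="pt").input_features

# 'ach' is just an example; use any code from the table above,
# or pass language=None to let the model auto-detect.
predicted_ids = model.to(device).generate(
    input_features.to(device),
    language=processor.tokenizer.decode(
        salt.constants.SALT_LANGUAGE_TOKENS_WHISPER['ach']),
    forced_decoder_ids=None)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```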