wav2vec2-xlsr-ft-cy

A version of facebook/wav2vec2-large-xlsr-53 fine-tuned on the Welsh Common Voice dataset.

Source code and scripts for training, and for hosting a local server, within Docker environments can be found at https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy

Usage

The model can be used directly (without a language model) as follows:

import torch
import librosa

from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("techiaith/wav2vec2-xlsr-ft-cy")
model = Wav2Vec2ForCTC.from_pretrained("techiaith/wav2vec2-xlsr-ft-cy")

# audio_file: path to an audio file; librosa resamples it to 16 kHz
audio, rate = librosa.load(audio_file, sr=16000)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# greedy decoding
predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))

Using a Language Model

This model can be used with a KenLM language model, reducing the WER to 15.07%. See https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy/releases/tag/21.05 for more details and examples.
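The repository's KenLM setup is not reproduced here, but the underlying idea can be sketched as shallow fusion: each candidate transcription from the acoustic model is rescored by adding a weighted language-model log-probability. The unigram LM, the alpha weight, and the example hypotheses below are all made up for illustration; the released setup uses a real KenLM n-gram model instead:

```python
import math

# Toy sketch of LM rescoring ("shallow fusion"): pick the hypothesis
# maximising acoustic_logprob + ALPHA * lm_logprob. The unigram table
# and ALPHA are assumed values, not taken from the released model.

LM_LOGPROB = {"bore": math.log(0.02), "da": math.log(0.05), "ba": math.log(0.001)}
ALPHA = 0.5  # LM weight (assumed)

def lm_score(text):
    # Unknown words get a small floor probability.
    return sum(LM_LOGPROB.get(w, math.log(1e-6)) for w in text.split())

def rescore(nbest):
    """nbest: list of (hypothesis, acoustic_logprob). Returns best hypothesis."""
    return max(nbest, key=lambda h: h[1] + ALPHA * lm_score(h[0]))[0]

# The acoustic model slightly prefers the misspelling "ba", but the LM
# strongly prefers the real word "da", so fusion picks "bore da".
nbest = [("bore ba", -3.0), ("bore da", -3.2)]
print(rescore(nbest))  # -> bore da
```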

Evaluation

The model has been evaluated on the Welsh Common Voice test set.

See: https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy/blob/main/train/python/evaluate.py
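The WER metric used in evaluation is the word-level edit distance (substitutions, insertions, and deletions) divided by the reference length. A minimal sketch, which may differ in detail from the repository's evaluate.py:

```python
# Minimal word error rate (WER): word-level Levenshtein distance
# divided by the number of reference words. Illustration only.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("bore da i chi", "bore da chi"))  # -> 0.25 (one deletion over four words)
```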
