---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice_16_1
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-tr-cv16.1
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_16_1
      type: common_voice_16_1
      config: tr
      split: test
      args: tr
    metrics:
    - name: Wer
      type: wer
      value: 0.41599252148275984
---

# wav2vec2-large-xls-r-300m-tr-cv16.1

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_16_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3356
- Wer: 0.4160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
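
These settings map onto the standard `transformers.TrainingArguments` roughly as sketched below. The original training script is not included in this repository, so the `output_dir` name and the 400-step evaluation cadence are assumptions (the cadence is inferred from the results table further down); the Adam betas and epsilon listed above are the optimizer defaults.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above
# (a sketch, not the exact script used for this run).
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-tr-cv16.1",  # assumed output name
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
    fp16=True,                       # "Native AMP" mixed precision; needs a CUDA device
    evaluation_strategy="steps",
    eval_steps=400,                  # assumed; matches the 400-step cadence in the results table
)
```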

### Model Inference

```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model = Wav2Vec2ForCTC.from_pretrained("rumeyskeskn/wav2vec2-large-xls-r-300m-tr-cv16.1").to("cpu")
processor = Wav2Vec2Processor.from_pretrained("rumeyskeskn/wav2vec2-large-xls-r-300m-tr-cv16.1")

# Load the audio and resample it to the 16 kHz rate the model expects.
audio_path = "audio.wav"
audio_array, sampling_rate = librosa.load(audio_path, sr=16000)

# Extract normalized input features as a padded PyTorch tensor.
input_dict = processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt", padding=True)

# Greedy decoding: pick the most likely token at each frame.
with torch.no_grad():
    logits = model(input_dict.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
prediction = processor.decode(pred_ids[0])
print("Prediction:")
print(prediction)
```
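
`processor.decode` on the argmax ids performs greedy CTC decoding: repeated token ids are collapsed and the blank/padding token is removed, with no external language model involved. Note that `librosa.load(..., sr=16000)` resamples the input to the 16 kHz sampling rate expected by the XLS-R feature extractor.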

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.669         | 0.39  | 400  | 1.2228          | 0.8840 |
| 0.6809        | 0.78  | 800  | 0.6371          | 0.6557 |
| 0.4224        | 1.17  | 1200 | 0.4607          | 0.5226 |
| 0.3151        | 1.56  | 1600 | 0.3671          | 0.4457 |
| 0.2633        | 1.95  | 2000 | 0.3356          | 0.4160 |
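
Wer above is the word error rate. A minimal sketch of computing it with the Hugging Face `evaluate` library (the transcriptions shown are placeholders, not outputs of this model):

```python
import evaluate

wer_metric = evaluate.load("wer")  # requires the `jiwer` package

# Placeholder strings; in practice, references come from the
# common_voice_16_1 Turkish test split and predictions from the model.
references = ["bugün hava çok güzel"]
predictions = ["bugün hava güzel"]

print(wer_metric.compute(predictions=predictions, references=references))
```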

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2