---
license: apache-2.0
tags:
  - automatic-speech-recognition
  - robust-speech-event
  - generated_from_trainer
datasets:
  - common_voice
model-index:
  - name: wav2vec2-large-xls-r-300m-hi
    results: []
---

# wav2vec2-large-xls-r-300m-hi

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set:

- Loss: 2.5039
- Wer: 0.8877
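
A quick-start sketch for loading the checkpoint with the `transformers` ASR pipeline; the Hub id and the audio file name below are assumptions, so adjust them to the actual repository:

```python
from transformers import pipeline

# Assumed Hub id; replace with the actual repository path.
asr = pipeline(
    "automatic-speech-recognition",
    model="reichenbach/wav2vec2-large-xls-r-300m-hi",
)

# Expects an audio file readable by ffmpeg (ideally 16 kHz mono).
result = asr("sample_hindi.wav")  # hypothetical local file
print(result["text"])
```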

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
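
These settings map onto `transformers.TrainingArguments` roughly as sketched below; the output directory is a placeholder, and `fp16=True` is an assumption inferred from the Native AMP note:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xls-r-300m-hi",  # placeholder path
    learning_rate=7.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 8 x 4 = effective batch size of 32
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    fp16=True,  # assumption: corresponds to Native AMP mixed precision
)
```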

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.4071        | 4.76  | 400  | 3.5871          | 1.0    |
| 3.5056        | 9.52  | 800  | 3.4414          | 1.0    |
| 2.9652        | 14.28 | 1200 | 2.1936          | 0.9573 |
| 1.3822        | 19.05 | 1600 | 2.1039          | 0.9157 |
| 0.9906        | 23.81 | 2000 | 2.2512          | 0.8960 |
| 0.8405        | 28.57 | 2400 | 2.2878          | 0.8931 |
| 0.7686        | 33.33 | 2800 | 2.3291          | 0.8884 |
| 0.7092        | 38.09 | 3200 | 2.4806          | 0.8921 |
| 0.6757        | 42.85 | 3600 | 2.4675          | 0.8847 |
| 0.6606        | 47.62 | 4000 | 2.5039          | 0.8877 |
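
The Wer column is the word error rate (lower is better). A sketch of how such a score can be computed with the `datasets` metric API in use at the time (the `wer` metric additionally requires the `jiwer` package); the transcripts below are placeholders:

```python
from datasets import load_metric

wer_metric = load_metric("wer")  # datasets 1.x metric API

predictions = ["namaste duniya"]      # placeholder model transcripts
references = ["namaste duniya dost"]  # placeholder reference transcripts
print(wer_metric.compute(predictions=predictions, references=references))
```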

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3

Built during the robust-speech-challenge event. We will keep updating this model.

Thanks to Patrick and Anton for the wonderful event.