
v2-wav2vec2-large-xls-r-300m-french-colab

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the results):

  • Loss: 0.3972
  • Wer: 0.2154
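
A minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub under the repository id nawel-ucsb/v2-wav2vec2-large-xls-r-300m-french-colab and that example.wav is a hypothetical 16 kHz mono French recording:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "nawel-ucsb/v2-wav2vec2-large-xls-r-300m-french-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load the audio and resample it to the 16 kHz rate the model expects.
speech, _ = librosa.load("example.wav", sr=16_000)  # hypothetical file path

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: take the most likely token at each frame, then collapse.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```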

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch appears after the list):

  • learning_rate: 0.0003
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 30
  • mixed_precision_training: Native AMP
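
A sketch of how these hyperparameters map onto Hugging Face TrainingArguments, assuming the standard Trainer setup; the output directory is hypothetical, and data preparation and the Trainer call itself are omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="v2-wav2vec2-large-xls-r-300m-french-colab",  # hypothetical path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                       # native AMP mixed-precision training
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer.
```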

Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3672        | 1.07  | 400   | 0.4010          | 0.3241 |
| 0.3467        | 2.14  | 800   | 0.4687          | 0.3642 |
| 0.3223        | 3.21  | 1200  | 0.4495          | 0.3248 |
| 0.3236        | 4.28  | 1600  | 0.4325          | 0.3289 |
| 0.2764        | 5.35  | 2000  | 0.4101          | 0.3005 |
| 0.2554        | 6.42  | 2400  | 0.4211          | 0.3148 |
| 0.2198        | 7.49  | 2800  | 0.4217          | 0.2946 |
| 0.2112        | 8.56  | 3200  | 0.4217          | 0.2930 |
| 0.1813        | 9.63  | 3600  | 0.4110          | 0.2682 |
| 0.1727        | 10.7  | 4000  | 0.3908          | 0.2791 |
| 0.1543        | 11.76 | 4400  | 0.4284          | 0.2746 |
| 0.154         | 12.83 | 4800  | 0.4096          | 0.2743 |
| 0.134         | 13.9  | 5200  | 0.4157          | 0.2582 |
| 0.1207        | 14.97 | 5600  | 0.4057          | 0.2525 |
| 0.1145        | 16.04 | 6000  | 0.4255          | 0.2498 |
| 0.0996        | 17.11 | 6400  | 0.4282          | 0.2488 |
| 0.0971        | 18.18 | 6800  | 0.3763          | 0.2427 |
| 0.0886        | 19.25 | 7200  | 0.3833          | 0.2473 |
| 0.082         | 20.32 | 7600  | 0.3849          | 0.2402 |
| 0.0765        | 21.39 | 8000  | 0.4083          | 0.2327 |
| 0.07          | 22.46 | 8400  | 0.4132          | 0.2355 |
| 0.0601        | 23.53 | 8800  | 0.4124          | 0.2332 |
| 0.0583        | 24.6  | 9200  | 0.3956          | 0.2248 |
| 0.0537        | 25.67 | 9600  | 0.4103          | 0.2289 |
| 0.0487        | 26.74 | 10000 | 0.4050          | 0.2266 |
| 0.0459        | 27.81 | 10400 | 0.3827          | 0.2173 |
| 0.0426        | 28.88 | 10800 | 0.3943          | 0.2152 |
| 0.0386        | 29.95 | 11200 | 0.3972          | 0.2154 |
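
The Wer column is the word error rate on the validation split. A minimal sketch of how it can be computed with the evaluate library (an assumption, since evaluate is not listed among the framework versions below); the transcript strings are placeholders, not outputs of this model:

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["bonjour tout le monde"]     # hypothetical model transcriptions
references = ["bonjour à tout le monde"]    # hypothetical reference transcripts

# Word error rate: (substitutions + insertions + deletions) / reference word count.
print(wer_metric.compute(predictions=predictions, references=references))
```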

Framework versions

  • Transformers 4.24.0
  • Pytorch 1.13.0+cu117
  • Datasets 2.7.1
  • Tokenizers 0.13.2