# xls-r-300m-hbs-fr-unfrozen-batch16

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set (a hedged inference sketch follows the metrics):

- Loss: 0.7093
- Wer: 0.3960
- Cer: 0.0915
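
A minimal transcription sketch is shown below. The repo id, the audio file name, and the use of librosa for loading are assumptions; any loader that yields 16 kHz mono float audio works.

```python
# Hedged usage sketch: "your-namespace/..." is a hypothetical repo id and
# "sample.wav" a placeholder file; swap in the real values.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "your-namespace/xls-r-300m-hbs-fr-unfrozen-batch16"  # hypothetical
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# XLS-R expects 16 kHz mono input; librosa resamples on load.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax over the vocabulary at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```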

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
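
As a rough guide, the list above maps onto `transformers.TrainingArguments` as sketched below; the output directory is a placeholder, and dataset preparation, the CTC data collator, and the `Trainer` call are deliberately omitted.

```python
# Hedged sketch only: reproduces the hyperparameters above, nothing else.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xls-r-300m-hbs-fr-unfrozen-batch16",  # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 * 2 = effective train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=100,
    fp16=True,  # native AMP mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults, so not set here.
)
```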

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer    | Cer    |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 3.5652        | 3.2258  | 100  | 3.3748          | 1.0    | 1.0    |
| 3.2583        | 6.4516  | 200  | 3.2149          | 1.0    | 1.0    |
| 3.1829        | 9.6774  | 300  | 3.1452          | 1.0    | 1.0    |
| 0.7256        | 12.9032 | 400  | 0.7889          | 0.7134 | 0.1766 |
| 0.3062        | 16.1290 | 500  | 0.6745          | 0.6146 | 0.1423 |
| 0.1843        | 19.3548 | 600  | 0.6301          | 0.5265 | 0.1242 |
| 0.1259        | 22.5806 | 700  | 0.6102          | 0.4820 | 0.1121 |
| 0.1386        | 25.8065 | 800  | 0.6702          | 0.4939 | 0.1176 |
| 0.0962        | 29.0323 | 900  | 0.6297          | 0.4806 | 0.1147 |
| 0.069         | 32.2581 | 1000 | 0.6766          | 0.4740 | 0.1113 |
| 0.0779        | 35.4839 | 1100 | 0.6565          | 0.4609 | 0.1075 |
| 0.0715        | 38.7097 | 1200 | 0.6649          | 0.4649 | 0.1103 |
| 0.0448        | 41.9355 | 1300 | 0.6558          | 0.4642 | 0.1094 |
| 0.0552        | 45.1613 | 1400 | 0.6893          | 0.4412 | 0.1035 |
| 0.0396        | 48.3871 | 1500 | 0.7179          | 0.4527 | 0.1041 |
| 0.0592        | 51.6129 | 1600 | 0.6455          | 0.4285 | 0.0976 |
| 0.0509        | 54.8387 | 1700 | 0.6605          | 0.4349 | 0.1005 |
| 0.0665        | 58.0645 | 1800 | 0.7340          | 0.4243 | 0.0991 |
| 0.0391        | 61.2903 | 1900 | 0.7378          | 0.4330 | 0.1018 |
| 0.0974        | 64.5161 | 2000 | 0.6984          | 0.4306 | 0.1003 |
| 0.0344        | 67.7419 | 2100 | 0.6895          | 0.4208 | 0.0974 |
| 0.043         | 70.9677 | 2200 | 0.7214          | 0.4140 | 0.0965 |
| 0.0248        | 74.1935 | 2300 | 0.7242          | 0.4149 | 0.0990 |
| 0.0194        | 77.4194 | 2400 | 0.7233          | 0.4107 | 0.0962 |
| 0.0277        | 80.6452 | 2500 | 0.7247          | 0.4100 | 0.0946 |
| 0.0447        | 83.8710 | 2600 | 0.7078          | 0.4004 | 0.0941 |
| 0.0291        | 87.0968 | 2700 | 0.7073          | 0.4002 | 0.0915 |
| 0.0208        | 90.3226 | 2800 | 0.7121          | 0.4025 | 0.0921 |
| 0.0278        | 93.5484 | 2900 | 0.6998          | 0.3932 | 0.0914 |
| 0.0569        | 96.7742 | 3000 | 0.7105          | 0.3964 | 0.0918 |
| 0.0132        | 100.0   | 3100 | 0.7093          | 0.3960 | 0.0915 |
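
For reference, the Wer and Cer columns above are word and character error rates; a minimal sketch of computing them with the `evaluate` library follows, using illustrative strings rather than actual Common Voice data.

```python
# Illustrative only: real evaluation would decode the held-out split first.
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

references = ["ovo je proba"]    # placeholder ground-truth transcript
predictions = ["ovo je proba"]   # placeholder model output

print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
```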

### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1