
xls-r-300m-hbs-ar-unfrozen-batch16

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice_17_0 dataset. It achieves the following results on the evaluation set (a brief inference sketch follows the metrics):

  • Loss: 0.7763
  • WER: 0.4695
  • CER: 0.1093
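
A minimal, hedged inference sketch is shown below. The Hub repository id, the audio file path, and the use of librosa for loading are assumptions for illustration; only the base checkpoint (facebook/wav2vec2-xls-r-300m fine-tuned for CTC) comes from this card. Input audio should be 16 kHz mono, as expected by wav2vec2 models.

```python
# Hedged sketch: transcribe one audio file with this fine-tuned CTC model.
# The repository id below is a placeholder; substitute the actual Hub id of this model.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "your-namespace/xls-r-300m-hbs-ar-unfrozen-batch16"  # placeholder, not from this card
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# wav2vec2-xls-r-300m expects 16 kHz mono input.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)  # illustrative file name

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax over the vocabulary, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```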

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a matching TrainingArguments sketch follows the list:

  • learning_rate: 0.0003
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 100
  • mixed_precision_training: Native AMP
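
The values above map directly onto a transformers TrainingArguments configuration. The sketch below is an assumption about how such a run is typically launched with the Trainer; the output directory and the evaluation/saving cadence (every 100 steps, inferred from the results table) are illustrative rather than taken verbatim from this card.

```python
# Hedged sketch: a TrainingArguments setup matching the hyperparameters listed above.
# Only the listed values come from this card; output_dir and the step cadence are illustrative.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xls-r-300m-hbs-ar-unfrozen-batch16",  # illustrative output path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=100,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="steps",     # assumption: evaluate every 100 steps, as in the table below
    eval_steps=100,
    save_steps=100,
    logging_steps=100,
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer's AdamW defaults,
# so they need no explicit arguments here.
```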

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER    | CER    |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 3.3679        | 3.2258  | 100  | 3.2752          | 1.0    | 1.0    |
| 3.0469        | 6.4516  | 200  | 2.9638          | 1.0    | 0.9902 |
| 0.5512        | 9.6774  | 300  | 0.7542          | 0.7664 | 0.1947 |
| 0.3029        | 12.9032 | 400  | 0.6819          | 0.6432 | 0.1584 |
| 0.1903        | 16.1290 | 500  | 0.7312          | 0.6361 | 0.1572 |
| 0.1464        | 19.3548 | 600  | 0.7223          | 0.5916 | 0.1456 |
| 0.1205        | 22.5806 | 700  | 0.7566          | 0.5738 | 0.1416 |
| 0.091         | 25.8065 | 800  | 0.7472          | 0.5527 | 0.1308 |
| 0.0686        | 29.0323 | 900  | 0.7029          | 0.5452 | 0.1337 |
| 0.0598        | 32.2581 | 1000 | 0.7889          | 0.5464 | 0.1309 |
| 0.0607        | 35.4839 | 1100 | 0.8012          | 0.5672 | 0.1412 |
| 0.0557        | 38.7097 | 1200 | 0.7628          | 0.5302 | 0.1333 |
| 0.0421        | 41.9355 | 1300 | 0.7861          | 0.5258 | 0.1265 |
| 0.0532        | 45.1613 | 1400 | 0.7843          | 0.5314 | 0.1272 |
| 0.0298        | 48.3871 | 1500 | 0.7888          | 0.5279 | 0.1253 |
| 0.0543        | 51.6129 | 1600 | 0.7847          | 0.5295 | 0.1290 |
| 0.0404        | 54.8387 | 1700 | 0.7314          | 0.5246 | 0.1249 |
| 0.0522        | 58.0645 | 1800 | 0.7505          | 0.5134 | 0.1222 |
| 0.0275        | 61.2903 | 1900 | 0.7588          | 0.5082 | 0.1202 |
| 0.0786        | 64.5161 | 2000 | 0.7733          | 0.4930 | 0.1171 |
| 0.0439        | 67.7419 | 2100 | 0.7953          | 0.4977 | 0.1133 |
| 0.0418        | 70.9677 | 2200 | 0.7664          | 0.4897 | 0.1126 |
| 0.0399        | 74.1935 | 2300 | 0.7599          | 0.4845 | 0.1100 |
| 0.0211        | 77.4194 | 2400 | 0.7747          | 0.4763 | 0.1115 |
| 0.0225        | 80.6452 | 2500 | 0.7607          | 0.4702 | 0.1094 |
| 0.0446        | 83.8710 | 2600 | 0.7583          | 0.4768 | 0.1103 |
| 0.0236        | 87.0968 | 2700 | 0.7824          | 0.4754 | 0.1102 |
| 0.0267        | 90.3226 | 2800 | 0.7861          | 0.4726 | 0.1110 |
| 0.0255        | 93.5484 | 2900 | 0.7928          | 0.4712 | 0.1106 |
| 0.0254        | 96.7742 | 3000 | 0.7834          | 0.4684 | 0.1102 |
| 0.0137        | 100.0   | 3100 | 0.7763          | 0.4695 | 0.1093 |
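
WER and CER above are standard edit-distance metrics over words and characters respectively. A small sketch of how they can be recomputed with the evaluate library is below; the library choice and the example strings are assumptions, not artifacts of this training run.

```python
# Hedged sketch: computing WER and CER for a batch of transcriptions.
# Assumes the `evaluate` library (with `jiwer` installed) provides the "wer" and "cer" metrics.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["an example reference transcription"]   # made-up strings for illustration
predictions = ["an example reference transcription"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```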

Framework versions

  • Transformers 4.42.0.dev0
  • Pytorch 2.3.1+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1