
wav2vec2-xls-r-300-vivos

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unspecified dataset (the model name suggests the VIVOS Vietnamese speech corpus, but the card does not confirm this). It achieves the following results on the evaluation set (a short WER example follows the list):

  • Loss: 0.5745
  • WER: 0.3214
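
WER (word error rate) is the number of word-level substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the number of reference words, so 0.3214 means roughly one error per three words. A minimal sketch of the metric, assuming the common jiwer package; the sentences are illustrative, not drawn from the actual evaluation set:

```python
# Minimal WER sketch with the jiwer package (pip install jiwer).
# The sentences below are illustrative only.
from jiwer import wer

reference = "xin chào các bạn"   # ground-truth transcript (4 words)
hypothesis = "xin chào cá bạn"   # model output with one substituted word

print(wer(reference, hypothesis))  # 0.25 = 1 edit / 4 reference words
```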

Model description

More information needed

Intended uses & limitations

More information needed
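
No usage guidance is documented in the card. As a minimal sketch, a Wav2Vec2 CTC checkpoint like this one should load through the standard transformers automatic-speech-recognition pipeline; the repo id below is a placeholder, not the confirmed Hub path for this model:

```python
# Minimal inference sketch. Replace the repo id with the actual Hub path
# of this checkpoint, or a local directory containing it.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-xls-r-300-vivos",  # hypothetical repo id
)

# XLS-R expects 16 kHz mono audio; the pipeline decodes and resamples
# file input via ffmpeg before running the model.
print(asr("sample.wav")["text"])
```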

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after the list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 50
  • mixed_precision_training: Native AMP
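
A minimal sketch of how these settings map onto transformers TrainingArguments, assuming the standard Trainer was used; output_dir is a placeholder, and the listed Adam betas and epsilon are the Trainer defaults, so they need no explicit arguments:

```python
# Sketch of TrainingArguments mirroring the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300-vivos",  # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed-precision training
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults.
)
```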

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER    |
|---------------|-------|-------|-----------------|--------|
| 8.1425        | 0.66  | 500   | 3.5478          | 1.0    |
| 3.4041        | 1.31  | 1000  | 2.9316          | 1.0001 |
| 1.6144        | 1.97  | 1500  | 0.7917          | 0.6804 |
| 0.8284        | 2.62  | 2000  | 0.5468          | 0.5401 |
| 0.6356        | 3.28  | 2500  | 0.4703          | 0.4812 |
| 0.553         | 3.94  | 3000  | 0.4371          | 0.4597 |
| 0.4903        | 4.59  | 3500  | 0.4748          | 0.4622 |
| 0.4524        | 5.25  | 4000  | 0.4442          | 0.4235 |
| 0.4107        | 5.91  | 4500  | 0.4354          | 0.4219 |
| 0.3869        | 6.56  | 5000  | 0.4204          | 0.4084 |
| 0.3711        | 7.22  | 5500  | 0.4053          | 0.3917 |
| 0.3507        | 7.87  | 6000  | 0.4134          | 0.3930 |
| 0.3396        | 8.53  | 6500  | 0.4040          | 0.3834 |
| 0.3284        | 9.19  | 7000  | 0.4278          | 0.3961 |
| 0.3096        | 9.84  | 7500  | 0.4590          | 0.3877 |
| 0.2878        | 10.5  | 8000  | 0.4369          | 0.3761 |
| 0.2872        | 11.15 | 8500  | 0.4224          | 0.3759 |
| 0.2756        | 11.81 | 9000  | 0.4442          | 0.3778 |
| 0.2618        | 12.47 | 9500  | 0.4504          | 0.3832 |
| 0.2658        | 13.12 | 10000 | 0.4431          | 0.3677 |
| 0.245         | 13.78 | 10500 | 0.4491          | 0.3684 |
| 0.2467        | 14.44 | 11000 | 0.4436          | 0.3553 |
| 0.2289        | 15.09 | 11500 | 0.4655          | 0.3649 |
| 0.2332        | 15.75 | 12000 | 0.4396          | 0.3530 |
| 0.2205        | 16.4  | 12500 | 0.4577          | 0.3605 |
| 0.2181        | 17.06 | 13000 | 0.4662          | 0.3544 |
| 0.2081        | 17.72 | 13500 | 0.4979          | 0.3617 |
| 0.2009        | 18.37 | 14000 | 0.4564          | 0.3598 |
| 0.1997        | 19.03 | 14500 | 0.4696          | 0.3526 |
| 0.1946        | 19.69 | 15000 | 0.5036          | 0.3590 |
| 0.1937        | 20.34 | 15500 | 0.4763          | 0.3565 |
| 0.1848        | 21.0  | 16000 | 0.5059          | 0.3564 |
| 0.1821        | 21.65 | 16500 | 0.5048          | 0.3622 |
| 0.1784        | 22.31 | 17000 | 0.5252          | 0.3588 |
| 0.1758        | 22.97 | 17500 | 0.4968          | 0.3482 |
| 0.1665        | 23.62 | 18000 | 0.5142          | 0.3511 |
| 0.1661        | 24.28 | 18500 | 0.5230          | 0.3507 |
| 0.1625        | 24.93 | 19000 | 0.5133          | 0.3476 |
| 0.1601        | 25.59 | 19500 | 0.5045          | 0.3406 |
| 0.1521        | 26.25 | 20000 | 0.5205          | 0.3472 |
| 0.1474        | 26.9  | 20500 | 0.5262          | 0.3481 |
| 0.1442        | 27.56 | 21000 | 0.5167          | 0.3393 |
| 0.1487        | 28.22 | 21500 | 0.5420          | 0.3467 |
| 0.1403        | 28.87 | 22000 | 0.5737          | 0.3548 |
| 0.1365        | 29.53 | 22500 | 0.5168          | 0.3359 |
| 0.133         | 30.18 | 23000 | 0.5551          | 0.3394 |
| 0.1372        | 30.84 | 23500 | 0.5464          | 0.3471 |
| 0.1313        | 31.5  | 24000 | 0.5537          | 0.3425 |
| 0.1275        | 32.15 | 24500 | 0.5673          | 0.3366 |
| 0.1177        | 32.81 | 25000 | 0.5440          | 0.3375 |
| 0.1231        | 33.46 | 25500 | 0.5436          | 0.3353 |
| 0.121         | 34.12 | 26000 | 0.5624          | 0.3333 |
| 0.1152        | 34.78 | 26500 | 0.5686          | 0.3415 |
| 0.117         | 35.43 | 27000 | 0.5517          | 0.3390 |
| 0.1139        | 36.09 | 27500 | 0.5543          | 0.3304 |
| 0.1089        | 36.75 | 28000 | 0.5630          | 0.3348 |
| 0.1159        | 37.4  | 28500 | 0.5635          | 0.3366 |
| 0.1115        | 38.06 | 29000 | 0.5657          | 0.3350 |
| 0.1068        | 38.71 | 29500 | 0.5782          | 0.3348 |
| 0.1026        | 39.37 | 30000 | 0.5721          | 0.3282 |
| 0.1058        | 40.03 | 30500 | 0.5746          | 0.3339 |
| 0.1017        | 40.68 | 31000 | 0.5727          | 0.3265 |
| 0.099         | 41.34 | 31500 | 0.5721          | 0.3309 |
| 0.1008        | 41.99 | 32000 | 0.5543          | 0.3274 |
| 0.0957        | 42.65 | 32500 | 0.5642          | 0.3245 |
| 0.0921        | 43.31 | 33000 | 0.5768          | 0.3239 |
| 0.0941        | 43.96 | 33500 | 0.5649          | 0.3235 |
| 0.0927        | 44.62 | 34000 | 0.5659          | 0.3250 |
| 0.0899        | 45.28 | 34500 | 0.5680          | 0.3193 |
| 0.0898        | 45.93 | 35000 | 0.5643          | 0.3212 |
| 0.0864        | 46.59 | 35500 | 0.5769          | 0.3250 |
| 0.0941        | 47.24 | 36000 | 0.5726          | 0.3247 |
| 0.0882        | 47.9  | 36500 | 0.5804          | 0.3250 |
| 0.086         | 48.56 | 37000 | 0.5762          | 0.3225 |
| 0.0861        | 49.21 | 37500 | 0.5748          | 0.3234 |
| 0.0842        | 49.87 | 38000 | 0.5745          | 0.3214 |

Framework versions

  • Transformers 4.39.3
  • PyTorch 2.1.2
  • Datasets 2.18.0
  • Tokenizers 0.15.2