
wav2vec2-large-xls-r-300m-spanish-small-v3

This model is a fine-tuned version of jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom on the common_voice dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 0.3986
  • WER: 0.1980
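
To transcribe audio with this checkpoint, a standard Wav2Vec2 CTC inference loop can be used. The snippet below is a minimal sketch and not part of the original card: the model id follows this repository's name, and "sample.wav" is a placeholder for any Spanish recording.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "tomascufaro/wav2vec2-large-xls-r-300m-spanish-small-v3"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder; XLS-R expects 16 kHz mono audio.
waveform, sample_rate = torchaudio.load("sample.wav")
if sample_rate != 16_000:
    waveform = torchaudio.transforms.Resample(sample_rate, 16_000)(waveform)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```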

Model description

This is a fine-tuned checkpoint of the 300M-parameter XLS-R (wav2vec 2.0) model, adapted for Spanish automatic speech recognition with a CTC head. It continues training from jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom; no further details are documented.

Intended uses & limitations

The model is intended for transcribing Spanish speech sampled at 16 kHz. Its limitations have not been documented.

Training and evaluation data

The model was fine-tuned and evaluated on the common_voice dataset; no further details (split, preprocessing) are documented.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 0.0004
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 25
  • mixed_precision_training: Native AMP
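
For reference, the settings above map onto the transformers Trainer configuration roughly as follows. This is a minimal sketch, not the original training script: output_dir is a placeholder, and the Adam betas/epsilon listed above are the Trainer defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters as TrainingArguments.
training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xls-r-300m-spanish-small-v3",  # assumed path
    learning_rate=4e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=25,
    fp16=True,  # "Native AMP" mixed-precision training
)
```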

Training results

| Training Loss | Epoch | Step | Validation Loss | WER |
|:---:|:---:|:---:|:---:|:---:|
| 0.2372 | 0.26 | 400 | 0.3011 | 0.2660 |
| 0.3413 | 0.53 | 800 | 0.3559 | 0.3228 |
| 0.3598 | 0.79 | 1200 | 0.3753 | 0.3400 |
| 0.3529 | 1.05 | 1600 | 0.3385 | 0.2979 |
| 0.3133 | 1.32 | 2000 | 0.3559 | 0.3056 |
| 0.3158 | 1.58 | 2400 | 0.3364 | 0.2994 |
| 0.3092 | 1.85 | 2800 | 0.3210 | 0.2876 |
| 0.2919 | 2.11 | 3200 | 0.3460 | 0.3010 |
| 0.2666 | 2.37 | 3600 | 0.3543 | 0.3036 |
| 0.2819 | 2.64 | 4000 | 0.3477 | 0.2959 |
| 0.283 | 2.9 | 4400 | 0.3492 | 0.2968 |
| 0.2484 | 3.16 | 4800 | 0.3647 | 0.2993 |
| 0.2371 | 3.43 | 5200 | 0.3601 | 0.2942 |
| 0.2382 | 3.69 | 5600 | 0.3656 | 0.3019 |
| 0.2425 | 3.96 | 6000 | 0.3379 | 0.2873 |
| 0.2092 | 4.22 | 6400 | 0.3385 | 0.2736 |
| 0.2171 | 4.48 | 6800 | 0.3503 | 0.2889 |
| 0.2185 | 4.75 | 7200 | 0.3289 | 0.2727 |
| 0.2236 | 5.01 | 7600 | 0.3447 | 0.2771 |
| 0.1882 | 5.27 | 8000 | 0.3586 | 0.2860 |
| 0.1986 | 5.54 | 8400 | 0.3404 | 0.2829 |
| 0.2055 | 5.8 | 8800 | 0.3561 | 0.2869 |
| 0.196 | 6.06 | 9200 | 0.3633 | 0.2811 |
| 0.1748 | 6.33 | 9600 | 0.3703 | 0.2818 |
| 0.1758 | 6.59 | 10000 | 0.3525 | 0.2816 |
| 0.1819 | 6.86 | 10400 | 0.3581 | 0.2765 |
| 0.1715 | 7.12 | 10800 | 0.3480 | 0.2628 |
| 0.1606 | 7.38 | 11200 | 0.3490 | 0.2703 |
| 0.1632 | 7.65 | 11600 | 0.3461 | 0.2706 |
| 0.1638 | 7.91 | 12000 | 0.3458 | 0.2673 |
| 0.1552 | 8.17 | 12400 | 0.3646 | 0.2732 |
| 0.154 | 8.44 | 12800 | 0.3706 | 0.2726 |
| 0.1512 | 8.7 | 13200 | 0.3609 | 0.2683 |
| 0.149 | 8.97 | 13600 | 0.3610 | 0.2668 |
| 0.1357 | 9.23 | 14000 | 0.3693 | 0.2740 |
| 0.1375 | 9.49 | 14400 | 0.3677 | 0.2625 |
| 0.1391 | 9.76 | 14800 | 0.3795 | 0.2762 |
| 0.1378 | 10.02 | 15200 | 0.3541 | 0.2592 |
| 0.1197 | 10.28 | 15600 | 0.3562 | 0.2507 |
| 0.1259 | 10.55 | 16000 | 0.3612 | 0.2584 |
| 0.1266 | 10.81 | 16400 | 0.3470 | 0.2527 |
| 0.1199 | 11.07 | 16800 | 0.3721 | 0.2571 |
| 0.1157 | 11.34 | 17200 | 0.3734 | 0.2571 |
| 0.1107 | 11.6 | 17600 | 0.3730 | 0.2589 |
| 0.1148 | 11.87 | 18000 | 0.3648 | 0.2536 |
| 0.1095 | 12.13 | 18400 | 0.3746 | 0.2521 |
| 0.1047 | 12.39 | 18800 | 0.3566 | 0.2530 |
| 0.1043 | 12.66 | 19200 | 0.3794 | 0.2545 |
| 0.1066 | 12.92 | 19600 | 0.3548 | 0.2439 |
| 0.0974 | 13.18 | 20000 | 0.3702 | 0.2461 |
| 0.0978 | 13.45 | 20400 | 0.3721 | 0.2492 |
| 0.095 | 13.71 | 20800 | 0.3599 | 0.2467 |
| 0.0963 | 13.97 | 21200 | 0.3650 | 0.2402 |
| 0.0902 | 14.24 | 21600 | 0.3689 | 0.2459 |
| 0.0898 | 14.5 | 22000 | 0.3832 | 0.2452 |
| 0.0865 | 14.77 | 22400 | 0.3982 | 0.2436 |
| 0.0911 | 15.03 | 22800 | 0.3785 | 0.2398 |
| 0.0793 | 15.29 | 23200 | 0.3731 | 0.2396 |
| 0.0806 | 15.56 | 23600 | 0.3626 | 0.2372 |
| 0.0789 | 15.82 | 24000 | 0.3707 | 0.2356 |
| 0.0779 | 16.08 | 24400 | 0.3850 | 0.2368 |
| 0.078 | 16.35 | 24800 | 0.3831 | 0.2363 |
| 0.0732 | 16.61 | 25200 | 0.3947 | 0.2287 |
| 0.0733 | 16.88 | 25600 | 0.3928 | 0.2374 |
| 0.0721 | 17.14 | 26000 | 0.3943 | 0.2324 |
| 0.0676 | 17.4 | 26400 | 0.3793 | 0.2311 |
| 0.0682 | 17.67 | 26800 | 0.3958 | 0.2257 |
| 0.0714 | 17.93 | 27200 | 0.3890 | 0.2322 |
| 0.0673 | 18.19 | 27600 | 0.3872 | 0.2229 |
| 0.0613 | 18.46 | 28000 | 0.3828 | 0.2226 |
| 0.0621 | 18.72 | 28400 | 0.3812 | 0.2214 |
| 0.0622 | 18.98 | 28800 | 0.3919 | 0.2212 |
| 0.0576 | 19.25 | 29200 | 0.4000 | 0.2205 |
| 0.0581 | 19.51 | 29600 | 0.3953 | 0.2203 |
| 0.0573 | 19.78 | 30000 | 0.3947 | 0.2190 |
| 0.0576 | 20.04 | 30400 | 0.3909 | 0.2156 |
| 0.0551 | 20.3 | 30800 | 0.4178 | 0.2153 |
| 0.0525 | 20.57 | 31200 | 0.3935 | 0.2152 |
| 0.0522 | 20.83 | 31600 | 0.4054 | 0.2151 |
| 0.0519 | 21.09 | 32000 | 0.3877 | 0.2135 |
| 0.0479 | 21.36 | 32400 | 0.4119 | 0.2107 |
| 0.0472 | 21.62 | 32800 | 0.3967 | 0.2091 |
| 0.048 | 21.89 | 33200 | 0.3812 | 0.2057 |
| 0.0458 | 22.15 | 33600 | 0.3931 | 0.2043 |
| 0.0459 | 22.41 | 34000 | 0.3937 | 0.2049 |
| 0.0448 | 22.68 | 34400 | 0.3900 | 0.2056 |
| 0.0432 | 22.94 | 34800 | 0.4050 | 0.2049 |
| 0.0425 | 23.2 | 35200 | 0.3985 | 0.2014 |
| 0.0415 | 23.47 | 35600 | 0.3976 | 0.2013 |
| 0.0403 | 23.73 | 36000 | 0.4031 | 0.2018 |
| 0.04 | 23.99 | 36400 | 0.3996 | 0.2000 |
| 0.039 | 24.26 | 36800 | 0.3977 | 0.1993 |
| 0.0406 | 24.52 | 37200 | 0.3967 | 0.2000 |
| 0.0391 | 24.79 | 37600 | 0.3986 | 0.1980 |
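
The WER column above is the word error rate: the word-level edit distance between predicted and reference transcripts, divided by the number of reference words. The snippet below is a hypothetical sketch of that computation using the evaluate library; it is not from the original card.

```python
import evaluate  # pip install evaluate jiwer

wer_metric = evaluate.load("wer")

# Two of the three reference words differ (missing accents), so WER = 2/3.
score = wer_metric.compute(
    predictions=["hola como estas"],
    references=["hola cómo estás"],
)
print(f"WER: {score:.4f}")  # WER: 0.6667
```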

Framework versions

  • Transformers 4.17.0.dev0
  • Pytorch 1.10.2+cu102
  • Datasets 1.18.2.dev0
  • Tokenizers 0.11.0
