
wav2vec2-large-xls-r-300m-ipa

This model was trained from scratch on the common_voice_17_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7309
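
The card itself does not include a usage snippet, so the following is a minimal inference sketch under stated assumptions: the checkpoint is a standard Wav2Vec2 CTC model whose tokenizer vocabulary consists of IPA symbols, and the repository id and audio file below are placeholders, not values taken from this card.

```python
# Minimal inference sketch (assumptions: Wav2Vec2 CTC checkpoint with an IPA
# tokenizer vocabulary; repository id and audio path are placeholders).
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "<user>/wav2vec2-large-xls-r-300m-ipa"  # placeholder repository id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load a mono recording; XLS-R expects 16 kHz audio, so resample beforehand if needed.
speech, sampling_rate = sf.read("sample.wav")  # placeholder audio file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # shape: (batch, time, vocab)
predicted_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(predicted_ids)[0])  # expected: an IPA transcription
```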

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 0.0001
  • train_batch_size: 6
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 24
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 240
  • mixed_precision_training: Native AMP
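
As a reference, these settings map onto the standard transformers TrainingArguments roughly as sketched below. The output_dir name, the 200-step evaluation/save cadence (inferred from the step column in the Training results table), and fp16 as the "Native AMP" switch are assumptions, not values recorded in this card.

```python
# Sketch of the reported hyperparameters expressed as transformers TrainingArguments.
# Assumptions: output_dir name, eval/save every 200 steps (inferred from the results
# table), and fp16=True standing in for "Native AMP" mixed-precision training.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-ipa",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # effective train batch size: 6 * 4 = 24
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=240,
    fp16=True,                       # mixed-precision training (Native AMP)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,
    save_steps=200,
)
```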

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1677 | 3.6866 | 200 | 1.0381 |
| 0.1214 | 7.3733 | 400 | 0.5607 |
| 0.1272 | 11.0599 | 600 | 0.5442 |
| 0.135 | 14.7465 | 800 | 0.5933 |
| 0.0824 | 18.4332 | 1000 | 0.6316 |
| 0.0711 | 22.1198 | 1200 | 0.5971 |
| 0.0653 | 25.8065 | 1400 | 0.6050 |
| 0.0499 | 29.4931 | 1600 | 0.6699 |
| 0.0516 | 33.1797 | 1800 | 0.6940 |
| 0.0507 | 36.8664 | 2000 | 0.7045 |
| 0.0478 | 40.5530 | 2200 | 0.7603 |
| 0.045 | 44.2396 | 2400 | 0.7415 |
| 0.0419 | 47.9263 | 2600 | 0.7341 |
| 0.0344 | 51.6129 | 2800 | 0.7328 |
| 0.0354 | 55.2995 | 3000 | 0.8550 |
| 0.0268 | 58.9862 | 3200 | 0.7838 |
| 0.0383 | 62.6728 | 3400 | 0.7995 |
| 0.0371 | 66.3594 | 3600 | 0.7765 |
| 0.0264 | 70.0461 | 3800 | 0.8186 |
| 0.0212 | 73.7327 | 4000 | 0.7439 |
| 0.0177 | 77.4194 | 4200 | 0.7830 |
| 0.0204 | 81.1060 | 4400 | 0.8145 |
| 0.0254 | 84.7926 | 4600 | 0.8149 |
| 0.0257 | 88.4793 | 4800 | 0.7663 |
| 0.0126 | 92.1659 | 5000 | 0.7704 |
| 0.0196 | 95.8525 | 5200 | 0.7660 |
| 0.0185 | 99.5392 | 5400 | 0.8580 |
| 0.0236 | 103.2258 | 5600 | 0.8169 |
| 0.0141 | 106.9124 | 5800 | 0.8222 |
| 0.0142 | 110.5991 | 6000 | 0.9001 |
| 0.0098 | 114.2857 | 6200 | 0.8509 |
| 0.0372 | 117.9724 | 6400 | 0.7734 |
| 0.0075 | 121.6590 | 6600 | 0.8911 |
| 0.0118 | 125.3456 | 6800 | 0.8347 |
| 0.0115 | 129.0323 | 7000 | 0.8926 |
| 0.0164 | 132.7189 | 7200 | 0.7985 |
| 0.006 | 136.4055 | 7400 | 0.7571 |
| 0.0124 | 140.0922 | 7600 | 0.8476 |
| 0.0141 | 143.7788 | 7800 | 0.8071 |
| 0.0065 | 147.4654 | 8000 | 0.7630 |
| 0.0095 | 151.1521 | 8200 | 0.7161 |
| 0.0063 | 154.8387 | 8400 | 0.8165 |
| 0.0107 | 158.5253 | 8600 | 0.7411 |
| 0.0037 | 162.2120 | 8800 | 0.7424 |
| 0.0045 | 165.8986 | 9000 | 0.7611 |
| 0.0044 | 169.5853 | 9200 | 0.7278 |
| 0.0043 | 173.2719 | 9400 | 0.7396 |
| 0.0025 | 176.9585 | 9600 | 0.7215 |
| 0.0029 | 180.6452 | 9800 | 0.7551 |
| 0.0067 | 184.3318 | 10000 | 0.7518 |
| 0.0062 | 188.0184 | 10200 | 0.7668 |
| 0.0065 | 191.7051 | 10400 | 0.7433 |
| 0.0024 | 195.3917 | 10600 | 0.7942 |
| 0.0039 | 199.0783 | 10800 | 0.7448 |
| 0.0024 | 202.7650 | 11000 | 0.7290 |
| 0.0036 | 206.4516 | 11200 | 0.7678 |
| 0.0001 | 210.1382 | 11400 | 0.7390 |
| 0.0009 | 213.8249 | 11600 | 0.7292 |
| 0.0008 | 217.5115 | 11800 | 0.7383 |
| 0.0009 | 221.1982 | 12000 | 0.7435 |
| 0.0009 | 224.8848 | 12200 | 0.7324 |
| 0.0007 | 228.5714 | 12400 | 0.7444 |
| 0.0002 | 232.2581 | 12600 | 0.7228 |
| 0.0005 | 235.9447 | 12800 | 0.7309 |

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1