
wav2vec2-large-300m-colab-only-gn

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice_13_0 dataset. It achieves the following results on the evaluation set (a brief inference sketch follows the results):

  • Loss: 0.5274
  • WER: 0.5229
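
A minimal inference sketch, not part of the original card: it assumes the checkpoint exposes the standard Wav2Vec2 CTC head and a 16 kHz processor, as is typical for XLS-R fine-tunes.

```python
# Hedged sketch: greedy CTC decoding with this checkpoint.
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "karanthgreeshma/wav2vec2-large-300m-colab-only-gn"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

def transcribe(speech_array):
    # speech_array: 1-D float array sampled at 16 kHz (e.g. loaded with soundfile/librosa)
    inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]
```

Greedy argmax decoding is shown; beam search with a language model would typically lower the WER further.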

Model description

More information needed. The base checkpoint is facebook/wav2vec2-xls-r-300m, a 300M-parameter multilingual XLS-R speech encoder, fine-tuned here with a CTC head for automatic speech recognition on Common Voice 13.0. The -gn suffix in the model name suggests the target language is Guaraní, though the card does not state this explicitly.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed. The header indicates fine-tuning on the common_voice_13_0 dataset; the exact splits used for training and evaluation are not documented. A hedged loading sketch follows.
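
A minimal sketch, assuming the data is the Guaraní ("gn") config of mozilla-foundation/common_voice_13_0 on the Hub; both the repo id and the language config are inferences from the card header and the model name, not something the card states.

```python
# Hedged sketch: load what is presumably the corresponding Common Voice split.
# (The Hub may require accepting the dataset's terms and an auth token.)
from datasets import load_dataset, Audio

common_voice = load_dataset(
    "mozilla-foundation/common_voice_13_0", "gn", split="train+validation"
)
# XLS-R models expect 16 kHz input; Common Voice audio ships at 48 kHz.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
```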

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 30
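
These values map onto transformers.TrainingArguments roughly as follows. This is a reconstruction under the hyperparameters above, not the author's actual training script: the output_dir is a placeholder, and the Adam betas/epsilon listed above are the library defaults.

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-300m-colab-only-gn",  # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are already the defaults
)
```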

Training results

| Training Loss | Epoch | Step | Validation Loss | WER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 20.8148       | 0.45  | 25   | 13.5976         | 1.0    |
| 7.0188        | 0.9   | 50   | 5.5263          | 1.0    |
| 4.1285        | 1.35  | 75   | 3.6078          | 1.0    |
| 3.338         | 1.8   | 100  | 3.3217          | 1.0    |
| 3.2829        | 2.25  | 125  | 3.2781          | 1.0    |
| 3.272         | 2.7   | 150  | 3.2601          | 1.0    |
| 3.2224        | 3.15  | 175  | 3.2234          | 1.0    |
| 3.1949        | 3.6   | 200  | 3.1998          | 1.0    |
| 3.1846        | 4.05  | 225  | 3.1841          | 1.0    |
| 3.1615        | 4.5   | 250  | 3.1719          | 1.0    |
| 3.1367        | 4.95  | 275  | 3.1132          | 1.0    |
| 3.0111        | 5.41  | 300  | 2.9344          | 1.0    |
| 2.7786        | 5.86  | 325  | 2.5643          | 1.0    |
| 2.2106        | 6.31  | 350  | 1.8132          | 1.0    |
| 1.6365        | 6.76  | 375  | 1.4008          | 0.9982 |
| 1.178         | 7.21  | 400  | 1.0678          | 0.9845 |
| 0.8903        | 7.66  | 425  | 0.8744          | 0.9369 |
| 0.7429        | 8.11  | 450  | 0.7213          | 0.8752 |
| 0.5931        | 8.56  | 475  | 0.6681          | 0.8189 |
| 0.5592        | 9.01  | 500  | 0.6622          | 0.7895 |
| 0.4316        | 9.46  | 525  | 0.6177          | 0.7644 |
| 0.4098        | 9.91  | 550  | 0.5599          | 0.7874 |
| 0.3176        | 10.36 | 575  | 0.5649          | 0.7001 |
| 0.3142        | 10.81 | 600  | 0.5828          | 0.6867 |
| 0.3227        | 11.26 | 625  | 0.5505          | 0.6736 |
| 0.275         | 11.71 | 650  | 0.5432          | 0.6540 |
| 0.2783        | 12.16 | 675  | 0.5372          | 0.6462 |
| 0.2316        | 12.61 | 700  | 0.5078          | 0.6379 |
| 0.2281        | 13.06 | 725  | 0.5059          | 0.6161 |
| 0.2191        | 13.51 | 750  | 0.5175          | 0.5956 |
| 0.1911        | 13.96 | 775  | 0.5216          | 0.5929 |
| 0.1731        | 14.41 | 800  | 0.5069          | 0.5789 |
| 0.1743        | 14.86 | 825  | 0.5207          | 0.5971 |
| 0.1755        | 15.32 | 850  | 0.5436          | 0.6307 |
| 0.1568        | 15.77 | 875  | 0.5374          | 0.6001 |
| 0.1629        | 16.22 | 900  | 0.5429          | 0.6102 |
| 0.1418        | 16.67 | 925  | 0.5089          | 0.5762 |
| 0.136         | 17.12 | 950  | 0.5291          | 0.5878 |
| 0.1354        | 17.57 | 975  | 0.5381          | 0.5840 |
| 0.1351        | 18.02 | 1000 | 0.5511          | 0.5947 |
| 0.1252        | 18.47 | 1025 | 0.5204          | 0.5643 |
| 0.1215        | 18.92 | 1050 | 0.5385          | 0.5613 |
| 0.1188        | 19.37 | 1075 | 0.5063          | 0.5718 |
| 0.1209        | 19.82 | 1100 | 0.5211          | 0.5488 |
| 0.1091        | 20.27 | 1125 | 0.5245          | 0.5557 |
| 0.112         | 20.72 | 1150 | 0.4910          | 0.5587 |
| 0.102         | 21.17 | 1175 | 0.5192          | 0.5581 |
| 0.0947        | 21.62 | 1200 | 0.5500          | 0.5718 |
| 0.1066        | 22.07 | 1225 | 0.5288          | 0.5488 |
| 0.1011        | 22.52 | 1250 | 0.5180          | 0.5438 |
| 0.0974        | 22.97 | 1275 | 0.5089          | 0.5277 |
| 0.0926        | 23.42 | 1300 | 0.5222          | 0.5301 |
| 0.0871        | 23.87 | 1325 | 0.5135          | 0.5366 |
| 0.0808        | 24.32 | 1350 | 0.4990          | 0.5331 |
| 0.0739        | 24.77 | 1375 | 0.5281          | 0.5351 |
| 0.0841        | 25.23 | 1400 | 0.5321          | 0.5360 |
| 0.0743        | 25.68 | 1425 | 0.5508          | 0.5447 |
| 0.0809        | 26.13 | 1450 | 0.5228          | 0.5396 |
| 0.0631        | 26.58 | 1475 | 0.5284          | 0.5351 |
| 0.0788        | 27.03 | 1500 | 0.5250          | 0.5289 |
| 0.0754        | 27.48 | 1525 | 0.5204          | 0.5259 |
| 0.0663        | 27.93 | 1550 | 0.5275          | 0.5313 |
| 0.0645        | 28.38 | 1575 | 0.5288          | 0.5259 |
| 0.0729        | 28.83 | 1600 | 0.5268          | 0.5259 |
| 0.0656        | 29.28 | 1625 | 0.5277          | 0.5232 |
| 0.0703        | 29.73 | 1650 | 0.5274          | 0.5229 |
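
The WER column is presumably the standard word error rate; below is a minimal sketch of computing it with the evaluate library. Whether the card's numbers were produced exactly this way is an assumption.

```python
# Hedged sketch: word error rate via the `evaluate` library (jiwer backend).
import evaluate

wer_metric = evaluate.load("wer")
# Placeholder strings; in practice these come from model output and the dataset.
predictions = ["this is a test transcription"]
references = ["this is the test transcription"]
print(wer_metric.compute(predictions=predictions, references=references))  # 0.2
```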

Framework versions

  • Transformers 4.34.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1