
torgo_xlsr_finetune-M01-2

This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53. The dataset field of this auto-generated card is unset ("None"); per the model name, it was fine-tuned on speech from speaker M01 of the TORGO dysarthric-speech corpus. It achieves the following results on the evaluation set:

  • Loss: 1.5763
  • WER: 0.9555
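
The card includes no usage snippet. Below is a minimal inference sketch, assuming the checkpoint is a standard Wav2Vec2 CTC model hosted on the Hugging Face Hub; the repo id and audio path are placeholders, not values from this card:

```python
# Hedged inference sketch for a Wav2Vec2 CTC checkpoint like this one.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "your-namespace/torgo_xlsr_finetune-M01-2"  # placeholder Hub path
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

# XLSR models expect 16 kHz mono audio; resample if needed.
waveform, sr = torchaudio.load("sample.wav")  # placeholder audio file
if sr != 16_000:
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the argmax token at each frame, then collapse.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```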

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 30
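
For reproducibility, here is a sketch of how these values map onto transformers.TrainingArguments. The output directory and the evaluation/logging cadence (every 500 steps, inferred from the results table below) are assumptions, not values stated in the card:

```python
# Hedged mapping of the listed hyperparameters onto TrainingArguments
# (API as of Transformers 4.26.1, the version pinned below).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="torgo_xlsr_finetune-M01-2",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    evaluation_strategy="steps",  # assumed: the table evaluates every 500 steps
    eval_steps=500,               # assumed from the results table
    logging_steps=500,            # assumed from the results table
)
```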

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 23.0716       | 0.9   | 500   | 3.3142          | 1.0    |
| 3.4092        | 1.8   | 1000  | 3.2440          | 1.0    |
| 2.9015        | 2.7   | 1500  | 2.8209          | 1.0    |
| 2.7211        | 3.6   | 2000  | 2.4913          | 1.2728 |
| 2.0884        | 4.5   | 2500  | 1.7817          | 1.4841 |
| 1.3426        | 5.41  | 3000  | 1.5117          | 1.4678 |
| 0.9866        | 6.31  | 3500  | 1.4760          | 1.3781 |
| 0.7874        | 7.21  | 4000  | 1.2179          | 1.2516 |
| 0.6424        | 8.11  | 4500  | 1.4501          | 1.2226 |
| 0.5505        | 9.01  | 5000  | 1.4132          | 1.3343 |
| 0.4709        | 9.91  | 5500  | 1.3289          | 1.1604 |
| 0.4358        | 10.81 | 6000  | 1.2615          | 1.1102 |
| 0.3892        | 11.71 | 6500  | 1.5597          | 1.1060 |
| 0.3602        | 12.61 | 7000  | 1.4205          | 1.1322 |
| 0.3298        | 13.51 | 7500  | 1.4411          | 1.1237 |
| 0.3184        | 14.41 | 8000  | 1.4017          | 1.1004 |
| 0.2954        | 15.32 | 8500  | 1.3428          | 1.0806 |
| 0.2745        | 16.22 | 9000  | 1.4793          | 1.0982 |
| 0.2533        | 17.12 | 9500  | 1.6004          | 1.1124 |
| 0.2378        | 18.02 | 10000 | 1.5802          | 1.0700 |
| 0.2234        | 18.92 | 10500 | 1.4462          | 1.0473 |
| 0.2147        | 19.82 | 11000 | 1.3814          | 1.0042 |
| 0.202         | 20.72 | 11500 | 1.5665          | 1.0226 |
| 0.1691        | 21.62 | 12000 | 1.4534          | 0.9958 |
| 0.1993        | 22.52 | 12500 | 1.4851          | 0.9894 |
| 0.1591        | 23.42 | 13000 | 1.3746          | 0.9746 |
| 0.1602        | 24.32 | 13500 | 1.4077          | 0.9710 |
| 0.1417        | 25.23 | 14000 | 1.5074          | 0.9668 |
| 0.1302        | 26.13 | 14500 | 1.5024          | 0.9456 |
| 0.1334        | 27.03 | 15000 | 1.4816          | 0.9541 |
| 0.1269        | 27.93 | 15500 | 1.5501          | 0.9541 |
| 0.1254        | 28.83 | 16000 | 1.5593          | 0.9527 |
| 0.12          | 29.73 | 16500 | 1.5763          | 0.9555 |
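
Note that WER can exceed 1.0, as in the early-to-mid epochs above, because insertions count as errors: the total of substitutions, deletions, and insertions can outnumber the reference words. A minimal sketch of the WER computation, using the load_metric API from the Datasets release pinned below (it requires the jiwer package); the example strings are hypothetical, not taken from TORGO:

```python
# Hedged sketch of the WER metric reported above.
from datasets import load_metric

wer_metric = load_metric("wer")  # backed by the jiwer package

references = ["the quick brown fox jumps"]  # hypothetical ground truth
predictions = ["the quick brown fox"]       # hypothetical decoded output

# 1 deletion over 5 reference words -> WER = 0.2;
# extra insertions in a hypothesis can push WER past 1.0.
print(wer_metric.compute(predictions=predictions, references=references))
```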

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu116
  • Datasets 1.18.3
  • Tokenizers 0.13.2