
wav2vec2-telugu_150

This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the openslr dataset.

It achieves the following results on the evaluation set:

  • Loss: 0.3312
  • Wer: 0.2213
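
For reference, here is a minimal inference sketch (not taken from the original training code). It assumes the standard Wav2Vec2 CTC interface, a 16 kHz mono input file, and that librosa is installed; the path "sample.wav" is a placeholder.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "krishnateja/wav2vec2-telugu_150"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# XLSR-53-based checkpoints expect 16 kHz mono audio
speech, _ = librosa.load("sample.wav", sr=16_000)  # "sample.wav" is a placeholder path
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```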

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 150
  • mixed_precision_training: Native AMP
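
As a rough illustration, these settings map onto transformers TrainingArguments as sketched below. This is a reconstruction for reference, not the exact training script; output_dir is a placeholder, and the Adam betas/epsilon listed above are the Trainer defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-telugu_150",   # placeholder output directory
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,      # effective train batch size: 16 * 2 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=150,
    fp16=True,                          # Native AMP mixed-precision training
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults used
)
```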

Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer    |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 6.096         | 3.84   | 400   | 0.5762          | 0.7029 |
| 0.427         | 7.69   | 800   | 0.3124          | 0.5148 |
| 0.208         | 11.54  | 1200  | 0.2994          | 0.4201 |
| 0.1506        | 15.38  | 1600  | 0.3106          | 0.3844 |
| 0.1223        | 19.23  | 2000  | 0.3080          | 0.3608 |
| 0.1094        | 23.08  | 2400  | 0.3206          | 0.3332 |
| 0.0949        | 26.92  | 2800  | 0.3085          | 0.3253 |
| 0.0802        | 30.77  | 3200  | 0.3076          | 0.3425 |
| 0.0713        | 34.61  | 3600  | 0.3280          | 0.3398 |
| 0.0687        | 38.46  | 4000  | 0.3042          | 0.3081 |
| 0.0613        | 42.31  | 4400  | 0.3227          | 0.3073 |
| 0.0548        | 46.15  | 4800  | 0.3152          | 0.3213 |
| 0.0508        | 50.0   | 5200  | 0.3259          | 0.3107 |
| 0.0455        | 53.84  | 5600  | 0.3046          | 0.2881 |
| 0.0427        | 57.69  | 6000  | 0.2779          | 0.3007 |
| 0.0391        | 61.54  | 6400  | 0.2996          | 0.2693 |
| 0.0388        | 65.38  | 6800  | 0.3016          | 0.2695 |
| 0.0339        | 69.23  | 7200  | 0.3225          | 0.2935 |
| 0.0312        | 73.08  | 7600  | 0.2907          | 0.2942 |
| 0.029         | 76.92  | 8000  | 0.3148          | 0.3029 |
| 0.0254        | 80.77  | 8400  | 0.3118          | 0.2996 |
| 0.0229        | 84.61  | 8800  | 0.3022          | 0.2993 |
| 0.0231        | 88.46  | 9200  | 0.3203          | 0.2465 |
| 0.019         | 92.31  | 9600  | 0.3223          | 0.2460 |
| 0.0173        | 96.15  | 10000 | 0.3178          | 0.2501 |
| 0.0168        | 100.0  | 10400 | 0.2937          | 0.2415 |
| 0.015         | 103.84 | 10800 | 0.3062          | 0.2415 |
| 0.014         | 107.69 | 11200 | 0.3104          | 0.2383 |
| 0.012         | 111.54 | 11600 | 0.3308          | 0.2408 |
| 0.0111        | 115.38 | 12000 | 0.3228          | 0.2335 |
| 0.01          | 119.23 | 12400 | 0.3228          | 0.2374 |
| 0.0096        | 123.08 | 12800 | 0.3241          | 0.2304 |
| 0.009         | 126.92 | 13200 | 0.3237          | 0.2295 |
| 0.0075        | 130.77 | 13600 | 0.3221          | 0.2261 |
| 0.0065        | 134.61 | 14000 | 0.3310          | 0.2277 |
| 0.0064        | 138.46 | 14400 | 0.3348          | 0.2266 |
| 0.0064        | 142.31 | 14800 | 0.3330          | 0.2229 |
| 0.0056        | 146.15 | 15200 | 0.3310          | 0.2229 |
| 0.0053        | 150.0  | 15600 | 0.3312          | 0.2213 |

Test results

  • WER (without LM): 42.8%
  • WER (with LM): 42%
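
The WER figures can be recomputed along the following lines; this is a generic sketch using the evaluate library (not listed in the framework versions below), with placeholder transcript lists, since the exact test split and LM decoding setup are not detailed in this card.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder lists: model outputs and ground-truth transcripts for the test set
predictions = ["predicted telugu transcript"]
references = ["reference telugu transcript"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.1%}")
```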

Framework versions

  • Transformers 4.24.0
  • Pytorch 1.13.0+cu117
  • Datasets 2.6.1
  • Tokenizers 0.13.2

P.S.: the "150" in the repository name denotes the number of training epochs.
