---
license: mit
base_model: Harveenchadha/vakyansh-wav2vec2-tamil-tam-250
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: vakyansh-wav2vec2-tamil-tam-250-audio-abuse-feature
    results: []
---

# vakyansh-wav2vec2-tamil-tam-250-audio-abuse-feature

This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-tamil-tam-250](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-tamil-tam-250) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.6061
- Accuracy: 0.7412
- Macro F1-score: 0.6531
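The evaluation code is not included in this card, but metrics like these are typically produced by a `compute_metrics` callback passed to the 🤗 `Trainer`. A minimal sketch, assuming a standard classification head (the function name and metric keys are illustrative, not taken from the actual training script):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Derive accuracy and macro-averaged F1 from Trainer predictions."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # pick the highest-scoring class
    return {
        "accuracy": accuracy_score(labels, preds),
        "macro_f1": f1_score(labels, preds, average="macro"),
    }
```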

## Model description

More information needed

## Intended uses & limitations

More information needed
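The card does not document a usage pattern, but the model name suggests an audio abuse classifier built on wav2vec2, which would load through the standard `audio-classification` pipeline. A minimal sketch; the Hub repository id, input file name, and output labels are assumptions, not confirmed by this card:

```python
from transformers import pipeline

# Assumed Hub id for this checkpoint; replace with the actual repository path.
classifier = pipeline(
    "audio-classification",
    model="callmesan/vakyansh-wav2vec2-tamil-tam-250-audio-abuse-feature",
)

# The pipeline accepts a path to an audio file or a raw waveform array.
predictions = classifier("example_clip.wav")  # hypothetical input file
print(predictions)  # list of {"label": ..., "score": ...} dicts
```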

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
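For readers reconstructing this setup, the list above maps directly onto 🤗 `TrainingArguments`. A sketch under that assumption; the `output_dir` and any argument not listed above are placeholders:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vakyansh-wav2vec2-tamil-tam-250-audio-abuse-feature",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```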

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------:|
| 6.7458 | 0.77 | 10 | 6.7472 | 0.0 | 0.0 |
| 6.7056 | 1.54 | 20 | 6.6488 | 0.0 | 0.0 |
| 6.6158 | 2.31 | 30 | 6.5180 | 0.6307 | 0.0917 |
| 6.4651 | 3.08 | 40 | 6.2887 | 0.7143 | 0.4167 |
| 6.2508 | 3.85 | 50 | 5.9094 | 0.7197 | 0.4185 |
| 5.8959 | 4.62 | 60 | 5.5362 | 0.7197 | 0.4185 |
| 5.6179 | 5.38 | 70 | 5.2347 | 0.7197 | 0.4185 |
| 5.3048 | 6.15 | 80 | 4.9823 | 0.7197 | 0.4185 |
| 5.0858 | 6.92 | 90 | 4.7555 | 0.7197 | 0.4185 |
| 4.9195 | 7.69 | 100 | 4.5424 | 0.7197 | 0.4185 |
| 4.6747 | 8.46 | 110 | 4.3265 | 0.7197 | 0.4185 |
| 4.5861 | 9.23 | 120 | 4.1193 | 0.7197 | 0.4185 |
| 4.3397 | 10.0 | 130 | 3.9070 | 0.7197 | 0.4185 |
| 4.0926 | 10.77 | 140 | 3.6954 | 0.7197 | 0.4185 |
| 3.8859 | 11.54 | 150 | 3.4822 | 0.7197 | 0.4185 |
| 3.7254 | 12.31 | 160 | 3.2711 | 0.7197 | 0.4185 |
| 3.5303 | 13.08 | 170 | 3.0599 | 0.7197 | 0.4185 |
| 3.2531 | 13.85 | 180 | 2.8502 | 0.7197 | 0.4185 |
| 3.0184 | 14.62 | 190 | 2.6448 | 0.7197 | 0.4185 |
| 3.0006 | 15.38 | 200 | 2.4472 | 0.7197 | 0.4185 |
| 2.6674 | 16.15 | 210 | 2.2526 | 0.7197 | 0.4185 |
| 2.4455 | 16.92 | 220 | 2.0649 | 0.7197 | 0.4185 |
| 2.2702 | 17.69 | 230 | 1.8883 | 0.7197 | 0.4185 |
| 2.0536 | 18.46 | 240 | 1.7233 | 0.7197 | 0.4185 |
| 2.0643 | 19.23 | 250 | 1.5730 | 0.7197 | 0.4185 |
| 1.8006 | 20.0 | 260 | 1.4368 | 0.7197 | 0.4185 |
| 1.6975 | 20.77 | 270 | 1.3112 | 0.7197 | 0.4185 |
| 1.4407 | 21.54 | 280 | 1.2015 | 0.7197 | 0.4185 |
| 1.2971 | 22.31 | 290 | 1.1050 | 0.7197 | 0.4185 |
| 1.3202 | 23.08 | 300 | 1.0219 | 0.7197 | 0.4185 |
| 1.1292 | 23.85 | 310 | 0.9490 | 0.7197 | 0.4185 |
| 1.1055 | 24.62 | 320 | 0.8879 | 0.7197 | 0.4185 |
| 0.9817 | 25.38 | 330 | 0.8366 | 0.7197 | 0.4185 |
| 0.9296 | 26.15 | 340 | 0.7906 | 0.7197 | 0.4185 |
| 0.8306 | 26.92 | 350 | 0.7506 | 0.7197 | 0.4185 |
| 0.8303 | 27.69 | 360 | 0.7171 | 0.7197 | 0.4185 |
| 0.8421 | 28.46 | 370 | 0.6953 | 0.7197 | 0.4185 |
| 0.7964 | 29.23 | 380 | 0.6650 | 0.7197 | 0.4185 |
| 0.7528 | 30.0 | 390 | 0.6470 | 0.7197 | 0.4185 |
| 0.7305 | 30.77 | 400 | 0.6345 | 0.7197 | 0.4185 |
| 0.6702 | 31.54 | 410 | 0.6163 | 0.7385 | 0.4937 |
| 0.6416 | 32.31 | 420 | 0.6118 | 0.7547 | 0.5507 |
| 0.608 | 33.08 | 430 | 0.6086 | 0.7547 | 0.5507 |
| 0.6659 | 33.85 | 440 | 0.5981 | 0.7574 | 0.5949 |
| 0.5839 | 34.62 | 450 | 0.6068 | 0.7547 | 0.6570 |
| 0.6167 | 35.38 | 460 | 0.5894 | 0.7763 | 0.6479 |
| 0.5991 | 36.15 | 470 | 0.5947 | 0.7412 | 0.6531 |
| 0.5839 | 36.92 | 480 | 0.5938 | 0.7574 | 0.6771 |
| 0.5533 | 37.69 | 490 | 0.5922 | 0.7520 | 0.6399 |
| 0.4998 | 38.46 | 500 | 0.6203 | 0.7358 | 0.6625 |
| 0.5508 | 39.23 | 510 | 0.5865 | 0.7493 | 0.6278 |
| 0.5159 | 40.0 | 520 | 0.5963 | 0.7385 | 0.6670 |
| 0.5344 | 40.77 | 530 | 0.5946 | 0.7439 | 0.6420 |
| 0.5039 | 41.54 | 540 | 0.5979 | 0.7466 | 0.6526 |
| 0.5456 | 42.31 | 550 | 0.5999 | 0.7358 | 0.6707 |
| 0.4822 | 43.08 | 560 | 0.5845 | 0.7493 | 0.6437 |
| 0.4864 | 43.85 | 570 | 0.6035 | 0.7439 | 0.6779 |
| 0.4623 | 44.62 | 580 | 0.5961 | 0.7520 | 0.6519 |
| 0.475 | 45.38 | 590 | 0.6066 | 0.7439 | 0.6651 |
| 0.4887 | 46.15 | 600 | 0.6014 | 0.7466 | 0.6603 |
| 0.506 | 46.92 | 610 | 0.6012 | 0.7412 | 0.6604 |
| 0.5296 | 47.69 | 620 | 0.5986 | 0.7439 | 0.6503 |
| 0.5255 | 48.46 | 630 | 0.6003 | 0.7439 | 0.6503 |
| 0.4667 | 49.23 | 640 | 0.6038 | 0.7466 | 0.6553 |
| 0.4334 | 50.0 | 650 | 0.6061 | 0.7412 | 0.6531 |

### Framework versions

- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3