soba2204/wav2vec2-asv19training

This model is a fine-tuned version of lighteternal/wav2vec2-large-xlsr-53-greek on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0076
  • Accuracy: 0.9984
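The card does not state the task, but the accuracy metric suggests a classification fine-tune. As a minimal sketch, this is how such an accuracy figure is typically computed from model outputs; the function and example arrays below are illustrative, not taken from this model's evaluation code:

```python
import numpy as np

def accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of examples where the argmax class matches the label."""
    preds = logits.argmax(axis=-1)
    return float((preds == labels).mean())

# Illustrative toy batch: 3 examples, 2 classes.
logits = np.array([[2.0, 0.1],
                   [0.2, 1.5],
                   [3.0, 0.5]])
labels = np.array([0, 1, 1])
print(accuracy(logits, labels))  # 2 of 3 correct
```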

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 550
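The hyperparameters above imply an effective (total) train batch size of 8 and a learning rate that decays linearly to zero over the 550 training steps. A minimal sketch of those two relationships, assuming no warmup (constant and function names below are illustrative, not from the training script):

```python
# Values copied from the hyperparameter list above.
TRAIN_BATCH_SIZE = 4
GRAD_ACCUM_STEPS = 2
LEARNING_RATE = 1e-4
TRAINING_STEPS = 550

# Effective batch size = per-device batch size x gradient-accumulation steps.
total_train_batch_size = TRAIN_BATCH_SIZE * GRAD_ACCUM_STEPS
print(total_train_batch_size)  # 8

def linear_lr(step: int) -> float:
    """Linear decay from LEARNING_RATE to 0 over TRAINING_STEPS (no warmup assumed)."""
    remaining = max(0, TRAINING_STEPS - step)
    return LEARNING_RATE * remaining / TRAINING_STEPS

print(linear_lr(0))    # full learning rate at the start
print(linear_lr(550))  # decayed to 0.0 at the final step
```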

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3693        | 0.0039 | 10   | 0.4063          | 0.8983   |
| 0.4466        | 0.0079 | 20   | 0.4230          | 0.8983   |
| 0.3205        | 0.0118 | 30   | 0.2852          | 0.8983   |
| 0.3037        | 0.0158 | 40   | 0.2173          | 0.9009   |
| 0.3259        | 0.0197 | 50   | 0.2704          | 0.8554   |
| 0.2012        | 0.0236 | 60   | 0.1175          | 0.9446   |
| 0.1654        | 0.0276 | 70   | 0.0382          | 0.9894   |
| 0.0635        | 0.0315 | 80   | 0.0470          | 0.9819   |
| 0.0182        | 0.0355 | 90   | 0.2888          | 0.9159   |
| 0.2049        | 0.0394 | 100  | 0.1801          | 0.9486   |
| 0.1128        | 0.0433 | 110  | 0.0171          | 0.9953   |
| 0.0246        | 0.0473 | 120  | 0.0321          | 0.9896   |
| 0.1879        | 0.0512 | 130  | 0.0337          | 0.9903   |
| 0.0582        | 0.0552 | 140  | 0.0077          | 0.9978   |
| 0.0052        | 0.0591 | 150  | 0.1915          | 0.9494   |
| 0.0023        | 0.0630 | 160  | 0.0554          | 0.9866   |
| 0.0634        | 0.0670 | 170  | 0.0636          | 0.9850   |
| 0.0058        | 0.0709 | 180  | 0.2431          | 0.9446   |
| 0.0365        | 0.0749 | 190  | 0.0071          | 0.9982   |
| 0.0046        | 0.0788 | 200  | 0.0180          | 0.9953   |
| 0.0227        | 0.0827 | 210  | 0.0186          | 0.9955   |
| 0.0003        | 0.0867 | 220  | 0.0268          | 0.9943   |
| 0.1036        | 0.0906 | 230  | 0.0303          | 0.9929   |
| 0.0764        | 0.0946 | 240  | 0.0070          | 0.9986   |
| 0.0013        | 0.0985 | 250  | 0.0051          | 0.9990   |
| 0.1667        | 0.1024 | 260  | 0.0078          | 0.9986   |
| 0.0399        | 0.1064 | 270  | 0.0085          | 0.9982   |
| 0.0058        | 0.1103 | 280  | 0.0387          | 0.9894   |
| 0.0767        | 0.1143 | 290  | 0.0238          | 0.9931   |
| 0.0011        | 0.1182 | 300  | 0.0148          | 0.9965   |
| 0.0019        | 0.1221 | 310  | 0.0123          | 0.9972   |
| 0.0518        | 0.1261 | 320  | 0.0087          | 0.9976   |
| 0.0019        | 0.1300 | 330  | 0.0213          | 0.9943   |
| 0.0006        | 0.1340 | 340  | 0.0309          | 0.9927   |
| 0.0007        | 0.1379 | 350  | 0.0360          | 0.9911   |
| 0.0007        | 0.1418 | 360  | 0.0356          | 0.9913   |
| 0.0022        | 0.1458 | 370  | 0.0431          | 0.9901   |
| 0.0009        | 0.1497 | 380  | 0.0349          | 0.9919   |
| 0.0005        | 0.1537 | 390  | 0.0247          | 0.9941   |
| 0.0242        | 0.1576 | 400  | 0.0211          | 0.9953   |
| 0.0003        | 0.1615 | 410  | 0.0097          | 0.9976   |
| 0.0003        | 0.1655 | 420  | 0.0076          | 0.9986   |
| 0.0004        | 0.1694 | 430  | 0.0070          | 0.9988   |
| 0.0003        | 0.1734 | 440  | 0.0070          | 0.9990   |
| 0.0198        | 0.1773 | 450  | 0.0036          | 0.9990   |
| 0.0006        | 0.1812 | 460  | 0.0027          | 0.9994   |
| 0.0002        | 0.1852 | 470  | 0.0025          | 0.9994   |
| 0.0011        | 0.1891 | 480  | 0.0034          | 0.9992   |
| 0.0006        | 0.1931 | 490  | 0.0049          | 0.9990   |
| 0.0002        | 0.1970 | 500  | 0.0054          | 0.9990   |
| 0.1014        | 0.2009 | 510  | 0.0066          | 0.9990   |
| 0.0004        | 0.2049 | 520  | 0.0069          | 0.9990   |
| 0.0004        | 0.2088 | 530  | 0.0070          | 0.9990   |
| 0.0010        | 0.2128 | 540  | 0.0075          | 0.9986   |
| 0.0377        | 0.2167 | 550  | 0.0076          | 0.9984   |

Framework versions

  • Transformers 4.41.0.dev0
  • Pytorch 2.1.2
  • Datasets 2.19.2.dev0
  • Tokenizers 0.19.1