---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
model-index:
  - name: wav2vec2-large-xlsr-53-finetuned-ks
    results: []
---

wav2vec2-large-xlsr-53-finetuned-ks

This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on an unspecified dataset. It achieves the following results on the evaluation set (an inference sketch follows the list):

  • Loss: 1.4923
  • Accuracy: 0.7871
  • F1: 0.7863
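
The snippet below is a minimal inference sketch using the transformers audio-classification pipeline. The Hub repo id is a placeholder (this card does not state where the checkpoint is published), and like other wav2vec2 checkpoints the model expects 16 kHz mono audio.

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub path of this checkpoint.
classifier = pipeline(
    "audio-classification",
    model="<username>/wav2vec2-large-xlsr-53-finetuned-ks",
)

# Accepts a file path or a raw waveform; audio should be 16 kHz mono.
for prediction in classifier("sample.wav", top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```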

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 3e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
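
As a hedged reconstruction, the values above map onto the transformers TrainingArguments API roughly as follows; the dataset loading, preprocessing, and Trainer call are not documented in this card and are omitted. The per-epoch evaluation cadence is inferred from the results table below.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-53-finetuned-ks",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # inferred: metrics are reported once per epoch
)
```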

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 1.379 | 1.0 | 141 | 1.3767 | 0.2991 | 0.1377 |
| 1.3611 | 2.0 | 283 | 1.3600 | 0.2991 | 0.1377 |
| 1.3393 | 3.0 | 424 | 1.3515 | 0.2991 | 0.1377 |
| 1.2932 | 4.0 | 566 | 1.3306 | 0.3607 | 0.3098 |
| 1.2356 | 5.0 | 707 | 1.2202 | 0.4397 | 0.3926 |
| 1.2222 | 6.0 | 849 | 1.3719 | 0.3601 | 0.2778 |
| 1.036 | 7.0 | 990 | 1.2779 | 0.4290 | 0.3781 |
| 1.0348 | 8.0 | 1132 | 1.2845 | 0.4257 | 0.3824 |
| 0.9044 | 9.0 | 1273 | 1.2239 | 0.4927 | 0.4646 |
| 0.8557 | 10.0 | 1415 | 1.6261 | 0.3926 | 0.3253 |
| 0.804 | 11.0 | 1556 | 1.0748 | 0.5703 | 0.5558 |
| 0.6517 | 12.0 | 1698 | 1.2891 | 0.5471 | 0.5294 |
| 0.6063 | 13.0 | 1839 | 0.9921 | 0.6552 | 0.6514 |
| 0.5008 | 14.0 | 1981 | 1.4346 | 0.5391 | 0.5162 |
| 0.5425 | 15.0 | 2122 | 1.3406 | 0.5802 | 0.5573 |
| 0.3806 | 16.0 | 2264 | 1.2260 | 0.6353 | 0.6291 |
| 0.4022 | 17.0 | 2405 | 1.7530 | 0.5444 | 0.5197 |
| 0.3001 | 18.0 | 2547 | 1.3619 | 0.6247 | 0.6132 |
| 0.1921 | 19.0 | 2688 | 1.3687 | 0.6505 | 0.6443 |
| 0.2704 | 20.0 | 2830 | 1.2533 | 0.6810 | 0.6745 |
| 0.3145 | 21.0 | 2971 | 1.6079 | 0.6233 | 0.6133 |
| 0.2045 | 22.0 | 3113 | 1.1432 | 0.7215 | 0.7198 |
| 0.2444 | 23.0 | 3254 | 1.4012 | 0.6936 | 0.6861 |
| 0.2223 | 24.0 | 3396 | 1.5944 | 0.6585 | 0.6533 |
| 0.2415 | 25.0 | 3537 | 1.1057 | 0.7454 | 0.7420 |
| 0.2233 | 26.0 | 3679 | 1.4083 | 0.7036 | 0.6997 |
| 0.119 | 27.0 | 3820 | 1.3240 | 0.7341 | 0.7323 |
| 0.1125 | 28.0 | 3962 | 1.8332 | 0.6658 | 0.6590 |
| 0.1577 | 29.0 | 4103 | 1.8048 | 0.6764 | 0.6714 |
| 0.1169 | 30.0 | 4245 | 1.3329 | 0.7573 | 0.7563 |
| 0.1348 | 31.0 | 4386 | 2.0588 | 0.6485 | 0.6359 |
| 0.1203 | 32.0 | 4528 | 1.6487 | 0.7082 | 0.7012 |
| 0.1262 | 33.0 | 4669 | 1.5428 | 0.7261 | 0.7236 |
| 0.0679 | 34.0 | 4811 | 1.5458 | 0.7374 | 0.7357 |
| 0.0741 | 35.0 | 4952 | 1.4596 | 0.7546 | 0.7508 |
| 0.0913 | 36.0 | 5094 | 1.3710 | 0.7699 | 0.7702 |
| 0.2104 | 37.0 | 5235 | 1.6693 | 0.7367 | 0.7344 |
| 0.0856 | 38.0 | 5377 | 1.6339 | 0.75 | 0.7483 |
| 0.0931 | 39.0 | 5518 | 1.6512 | 0.7580 | 0.7571 |
| 0.0613 | 40.0 | 5660 | 1.6046 | 0.7646 | 0.7638 |
| 0.0713 | 41.0 | 5801 | 1.4553 | 0.7785 | 0.7779 |
| 0.025 | 42.0 | 5943 | 1.5725 | 0.7639 | 0.7625 |
| 0.0811 | 43.0 | 6084 | 1.7562 | 0.75 | 0.7474 |
| 0.0315 | 44.0 | 6226 | 1.4923 | 0.7871 | 0.7863 |
| 0.1026 | 45.0 | 6367 | 1.6013 | 0.7712 | 0.7706 |
| 0.0489 | 46.0 | 6509 | 1.7439 | 0.7533 | 0.7502 |
| 0.0248 | 47.0 | 6650 | 1.6019 | 0.7745 | 0.7730 |
| 0.0269 | 48.0 | 6792 | 1.6128 | 0.7679 | 0.7659 |
| 0.0114 | 49.0 | 6933 | 1.5737 | 0.7798 | 0.7788 |
| 0.0609 | 49.82 | 7050 | 1.6570 | 0.7712 | 0.7692 |

Framework versions

  • Transformers 4.36.2
  • Pytorch 2.1.1+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0
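
For reproducibility, the following is a quick sanity check of the local environment against the versions listed above (it assumes the four packages are installed):

```python
import datasets
import tokenizers
import torch
import transformers

# Expected: Transformers 4.36.2, PyTorch 2.1.1+cu121, Datasets 2.16.1, Tokenizers 0.15.0
print("Transformers:", transformers.__version__)
print("PyTorch:", torch.__version__)
print("Datasets:", datasets.__version__)
print("Tokenizers:", tokenizers.__version__)
```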