
train_from_raw_cv12_true__0050

This model is a fine-tuned version of openai/whisper-tiny on an unknown dataset. It achieves the following results on the training and evaluation sets:

  • Train Loss: 0.0024
  • Train Accuracy: 0.1115
  • Train Wermet: 4.0167
  • Validation Loss: 0.4845
  • Validation Accuracy: 0.0642
  • Validation Wermet: 10.2553
  • Epoch: 49
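The "Wermet" values above appear to be a word-error-rate (WER) style metric; values above 1 mean the hypothesis needs more word edits than the reference has words. The exact metric implementation used for this card is not documented, but a minimal WER sketch (word-level edit distance divided by reference length) looks like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words.

    A minimal illustration, not necessarily the exact metric used for the
    Wermet numbers reported in this card.
    """
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("the cat sat", "the cat")` counts one deletion against three reference words, i.e. 1/3.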

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
  • training_precision: float32
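The `AdamWeightDecay` optimizer named above is Adam with decoupled weight decay (AdamW): the decay term is applied directly to the weights rather than folded into the gradient. A minimal pure-Python sketch of a single update step with the hyperparameters listed above (illustrative only, not the `transformers` implementation):

```python
import math

def adamw_step(w, g, m, v, t, lr=2e-5, beta1=0.9, beta2=0.999,
               eps=1e-7, weight_decay=0.01):
    """One AdamW update on a scalar weight w with gradient g at step t >= 1.

    m, v are the running first and second moment estimates.
    Hyperparameter defaults match the optimizer config in this card.
    """
    # Exponential moving averages of the gradient and squared gradient.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    # Bias correction for the zero-initialized moments.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Decoupled weight decay: the decay acts on w itself, outside the
    # adaptive gradient term.
    w = w - lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * w)
    return w, m, v

# Three steps with a constant positive gradient shrink the weight.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):
    w, m, v = adamw_step(w, 0.5, m, v, t)
```

In the actual training code this would be `transformers.AdamWeightDecay` driving the TensorFlow model's `fit` loop.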

Training results

| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 2.3396     | 0.0444         | 2.7124       | 1.8124          | 0.0331              | 9.4418            | 0     |
| 1.7496     | 0.0562         | 3.2708       | 1.6529          | 0.0359              | 9.7805            | 1     |
| 1.6254     | 0.0594         | 3.1714       | 1.5874          | 0.0371              | 10.4139           | 2     |
| 1.5490     | 0.0615         | 3.0208       | 1.5102          | 0.0382              | 8.2054            | 3     |
| 1.4910     | 0.0631         | 2.8298       | 1.4213          | 0.0402              | 8.9930            | 4     |
| 1.4233     | 0.0650         | 2.7100       | 1.3319          | 0.0421              | 8.8483            | 5     |
| 1.3203     | 0.0679         | 2.5125       | 1.1806          | 0.0450              | 7.0373            | 6     |
| 1.1453     | 0.0730         | 2.5001       | 1.0029          | 0.0484              | 5.7328            | 7     |
| 0.9537     | 0.0789         | 2.6295       | 0.7799          | 0.0529              | 6.7373            | 8     |
| 0.7822     | 0.0844         | 2.7501       | 0.6499          | 0.0556              | 8.5538            | 9     |
| 0.6317     | 0.0895         | 2.9507       | 0.5467          | 0.0578              | 8.4990            | 10    |
| 0.5010     | 0.0940         | 3.1604       | 0.4597          | 0.0597              | 9.4002            | 11    |
| 0.4026     | 0.0975         | 3.2910       | 0.3984          | 0.0610              | 9.8173            | 12    |
| 0.3372     | 0.0998         | 3.5302       | 0.3571          | 0.0619              | 9.6433            | 13    |
| 0.2922     | 0.1013         | 3.5614       | 0.3391          | 0.0623              | 9.6152            | 14    |
| 0.2572     | 0.1025         | 3.5685       | 0.3157          | 0.0628              | 9.4497            | 15    |
| 0.2258     | 0.1036         | 3.5179       | 0.3104          | 0.0630              | 9.7735            | 16    |
| 0.1995     | 0.1045         | 3.5362       | 0.2944          | 0.0634              | 9.8154            | 17    |
| 0.1762     | 0.1054         | 3.5227       | 0.2820          | 0.0637              | 9.8672            | 18    |
| 0.1551     | 0.1061         | 3.5489       | 0.2849          | 0.0638              | 9.6569            | 19    |
| 0.1356     | 0.1068         | 3.4990       | 0.2821          | 0.0639              | 9.9735            | 20    |
| 0.1174     | 0.1074         | 3.5119       | 0.2841          | 0.0640              | 9.8770            | 21    |
| 0.1016     | 0.1080         | 3.5233       | 0.2903          | 0.0640              | 10.0309           | 22    |
| 0.0847     | 0.1087         | 3.5368       | 0.3013          | 0.0640              | 9.8095            | 23    |
| 0.0713     | 0.1092         | 3.5250       | 0.3040          | 0.0640              | 9.7686            | 24    |
| 0.0596     | 0.1096         | 3.5310       | 0.3137          | 0.0640              | 9.9239            | 25    |
| 0.0478     | 0.1101         | 3.5776       | 0.3228          | 0.0641              | 10.2774           | 26    |
| 0.0400     | 0.1104         | 3.6155       | 0.3316          | 0.0641              | 9.9082            | 27    |
| 0.0301     | 0.1107         | 3.6545       | 0.3446          | 0.0641              | 9.9672            | 28    |
| 0.0227     | 0.1110         | 3.7827       | 0.3579          | 0.0641              | 10.2859           | 29    |
| 0.0179     | 0.1112         | 3.7672       | 0.3728          | 0.0640              | 10.1965           | 30    |
| 0.0148     | 0.1112         | 3.7575       | 0.3829          | 0.0641              | 10.5114           | 31    |
| 0.0116     | 0.1113         | 3.7682       | 0.3959          | 0.0641              | 10.4941           | 32    |
| 0.0100     | 0.1114         | 3.8332       | 0.4056          | 0.0641              | 10.6330           | 33    |
| 0.0140     | 0.1112         | 3.8661       | 0.4283          | 0.0638              | 10.5741           | 34    |
| 0.0159     | 0.1111         | 3.8054       | 0.4203          | 0.0640              | 10.5108           | 35    |
| 0.0085     | 0.1114         | 3.7278       | 0.4236          | 0.0641              | 10.3731           | 36    |
| 0.0052     | 0.1115         | 3.8251       | 0.4366          | 0.0641              | 10.5728           | 37    |
| 0.0058     | 0.1115         | 3.9767       | 0.4459          | 0.0641              | 10.5720           | 38    |
| 0.0074     | 0.1114         | 4.0721       | 0.4775          | 0.0637              | 10.9922           | 39    |
| 0.0133     | 0.1111         | 3.9087       | 0.4560          | 0.0640              | 9.8398            | 40    |
| 0.0072     | 0.1114         | 3.9828       | 0.4533          | 0.0642              | 10.7116           | 41    |
| 0.0045     | 0.1115         | 3.9705       | 0.4668          | 0.0641              | 10.9326           | 42    |
| 0.0050     | 0.1114         | 4.1288       | 0.4677          | 0.0641              | 10.5865           | 43    |
| 0.0057     | 0.1114         | 4.1164       | 0.4709          | 0.0642              | 10.9947           | 44    |
| 0.0148     | 0.1111         | 4.2062       | 0.4594          | 0.0643              | 10.5855           | 45    |
| 0.0053     | 0.1114         | 4.0964       | 0.4598          | 0.0644              | 10.9385           | 46    |
| 0.0024     | 0.1115         | 4.1318       | 0.4660          | 0.0644              | 11.2717           | 47    |
| 0.0018     | 0.1115         | 4.1024       | 0.4696          | 0.0644              | 11.0203           | 48    |
| 0.0024     | 0.1115         | 4.0167       | 0.4845          | 0.0642              | 10.2553           | 49    |

Framework versions

  • Transformers 4.33.0.dev0
  • TensorFlow 2.13.0
  • Tokenizers 0.13.3