Baselhany committed (verified)
Commit 89e6b61 · 1 Parent(s): 3f0e4ef

Training finished

README.md CHANGED
@@ -20,9 +20,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the quran-ayat-speech-to-text dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0408
- - Wer: 0.3145
- - Cer: 0.1128
+ - Loss: 0.0256
+ - Wer: 0.1250
+ - Cer: 0.0445
 
 ## Model description
 
@@ -41,7 +41,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - learning_rate: 0.0001
+ - learning_rate: 5e-05
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
@@ -50,38 +50,33 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
- - num_epochs: 25
+ - num_epochs: 20
 - mixed_precision_training: Native AMP
 
 ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
- |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
- | 0.1136 | 1.0 | 94 | 0.0777 | 5.3133 | 1.8561 |
- | 0.0458 | 2.0 | 188 | 0.0478 | 2.6311 | 1.1803 |
- | 0.0292 | 3.0 | 282 | 0.0402 | 1.8776 | 0.9786 |
- | 0.0239 | 4.0 | 376 | 0.0369 | 0.9783 | 0.4714 |
- | 0.0182 | 5.0 | 470 | 0.0368 | 0.6109 | 0.2680 |
- | 0.0163 | 6.0 | 564 | 0.0357 | 0.6387 | 0.2671 |
- | 0.0132 | 7.0 | 658 | 0.0363 | 0.3701 | 0.1281 |
- | 0.01 | 8.0 | 752 | 0.0369 | 0.5068 | 0.1884 |
- | 0.0067 | 9.0 | 846 | 0.0392 | 0.3609 | 0.1184 |
- | 0.0056 | 10.0 | 940 | 0.0354 | 0.3155 | 0.1000 |
- | 0.0041 | 11.0 | 1034 | 0.0382 | 0.3131 | 0.0964 |
- | 0.0027 | 12.0 | 1128 | 0.0363 | 0.3419 | 0.1087 |
- | 0.0018 | 13.0 | 1222 | 0.0387 | 0.3640 | 0.1111 |
- | 0.0015 | 14.0 | 1316 | 0.0388 | 0.3428 | 0.1179 |
- | 0.0012 | 15.0 | 1410 | 0.0382 | 0.3730 | 0.1359 |
- | 0.0012 | 16.0 | 1504 | 0.0386 | 0.2863 | 0.0933 |
- | 0.0007 | 17.0 | 1598 | 0.0387 | 0.3721 | 0.1560 |
- | 0.0005 | 18.0 | 1692 | 0.0391 | 0.2780 | 0.0847 |
- | 0.0004 | 19.0 | 1786 | 0.0405 | 0.2985 | 0.1114 |
- | 0.0003 | 20.0 | 1880 | 0.0409 | 0.3452 | 0.1222 |
- | 0.0002 | 21.0 | 1974 | 0.0405 | 0.3475 | 0.1441 |
- | 0.0 | 22.0 | 2068 | 0.0413 | 0.2614 | 0.0754 |
- | 0.0 | 23.0 | 2162 | 0.0423 | 0.3339 | 0.1156 |
- | 0.0 | 24.0 | 2256 | 0.0426 | 0.2676 | 0.0854 |
- | 0.0 | 25.0 | 2350 | 0.0442 | 0.3357 | 0.1101 |
+ | Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
+ |:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
+ | 0.0263 | 0.9973 | 187 | 0.0205 | 0.1624 | 0.0613 |
+ | 0.0093 | 2.0 | 375 | 0.0149 | 0.1519 | 0.0529 |
+ | 0.0051 | 2.9973 | 562 | 0.0157 | 0.1580 | 0.0512 |
+ | 0.004 | 4.0 | 750 | 0.0181 | 0.1636 | 0.0539 |
+ | 0.002 | 4.9973 | 937 | 0.0193 | 0.1557 | 0.0502 |
+ | 0.0011 | 6.0 | 1125 | 0.0206 | 0.1558 | 0.0506 |
+ | 0.0009 | 6.9973 | 1312 | 0.0213 | 0.1513 | 0.0498 |
+ | 0.0005 | 8.0 | 1500 | 0.0214 | 0.1544 | 0.0504 |
+ | 0.0004 | 8.9973 | 1687 | 0.0220 | 0.1464 | 0.0458 |
+ | 0.0004 | 10.0 | 1875 | 0.0216 | 0.1459 | 0.0461 |
+ | 0.0002 | 10.9973 | 2062 | 0.0224 | 0.1452 | 0.0454 |
+ | 0.0001 | 12.0 | 2250 | 0.0224 | 0.1437 | 0.0452 |
+ | 0.0001 | 12.9973 | 2437 | 0.0234 | 0.2224 | 0.0832 |
+ | 0.0 | 14.0 | 2625 | 0.0231 | 0.1356 | 0.0540 |
+ | 0.0 | 14.9973 | 2812 | 0.0236 | 0.2134 | 0.0797 |
+ | 0.0 | 16.0 | 3000 | 0.0241 | 0.2159 | 0.0796 |
+ | 0.0 | 16.9973 | 3187 | 0.0253 | 0.1338 | 0.0517 |
+ | 0.0 | 18.0 | 3375 | 0.0257 | 0.1271 | 0.0493 |
+ | 0.0 | 18.9973 | 3562 | 0.0264 | 0.1287 | 0.0492 |
+ | 0.0 | 19.9467 | 3740 | 0.0266 | 0.1280 | 0.0489 |
 
 
 ### Framework versions
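For readers who want to reproduce the updated configuration, the sketch below shows how the hyperparameters listed in the card (learning_rate 5e-05, batch size 16, linear schedule with 500 warmup steps, 20 epochs, native AMP) could be expressed with `Seq2SeqTrainingArguments` from transformers. This is a minimal illustration rather than the author's actual training script; the output directory and the per-epoch evaluation strategy are assumptions.

```python
# Minimal sketch only: maps the hyperparameters from the model card onto
# Seq2SeqTrainingArguments. Not taken from this repository's training code.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-quran",  # placeholder path (assumption)
    learning_rate=5e-05,                # updated in this commit (was 0.0001)
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20,                # updated in this commit (was 25)
    fp16=True,                          # "Native AMP" mixed-precision training
    eval_strategy="epoch",              # assumption, matching the per-epoch results table
                                        # (named evaluation_strategy in older transformers releases)
)
# The optimizer line in the card (Adam, betas=(0.9, 0.999), epsilon=1e-08)
# matches the Trainer's default AdamW settings, so no extra arguments are needed.
```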
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:8655c254472fdd64e673d4c9cbcfdd504145be115573e5b80e7c002198577e9d
+ oid sha256:fb166b19ffcd8c8ec68b29c1f5e48eb34a08f6e7b10cc400d64bc67367f45c1b
 size 151061672
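The updated `model.safetensors` pointer above refers to the new fine-tuned weights. As a usage illustration only, the following sketch transcribes a clip with a checkpoint like this one through the ASR pipeline; the repository id and audio file name are placeholders, not values confirmed by this commit.

```python
# Illustration only: the repo id and audio path below are placeholders.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/whisper-tiny-quran",  # placeholder repo id (assumption)
)
result = asr("ayah.wav")  # hypothetical audio clip of a recited ayah
print(result["text"])
```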
runs/Dec27_12-32-57_9ffc9edd6562/events.out.tfevents.1735317570.9ffc9edd6562.41.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa44eb46ed25df92cd286927b3ae16799aa76559cd7c31340c8c3178b0c22d89
+ size 453
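The Wer and Cer values reported in the card appear to be raw fractions rather than percentages (e.g. Wer 0.1250 corresponds to roughly a 12.5% word error rate), which is what the `evaluate` library's metrics return. A minimal sketch of computing them, assuming lists of reference and predicted transcriptions; the example strings are illustrative and not taken from the dataset.

```python
# Minimal sketch: computing WER/CER with the evaluate library (requires jiwer).
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["بسم الله الرحمن الرحيم"]   # ground-truth transcription (example)
predictions = ["بسم الله الرحمن الرحيم"]  # model output for the same clip

print("WER:", wer_metric.compute(references=references, predictions=predictions))
print("CER:", cer_metric.compute(references=references, predictions=predictions))
```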