Commit 57adb82 (parent: 0dcb9e9)
update model card README.md

README.md CHANGED
@@ -16,9 +16,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Wer:
-- Cer:
+- Loss: 0.5835
+- Wer: 24.8548
+- Cer: 8.2429
 
 ## Model description
 
@@ -38,13 +38,13 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size:
-- eval_batch_size:
+- train_batch_size: 1
+- eval_batch_size: 2
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps:
+- training_steps: 35000
 - mixed_precision_training: Native AMP
 
 ### Training results
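The hyperparameter list in the hunk above maps onto a Hugging Face `Seq2SeqTrainingArguments` configuration roughly as follows. This is a hedged sketch, not the training script behind this commit: `output_dir` is a hypothetical placeholder, and the original run may have set options the card does not list. The Adam betas and epsilon shown are passed explicitly here, and `fp16=True` corresponds to the card's "Native AMP" mixed-precision entry.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reconstructs the hyperparameters listed in the model card.
args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-finetuned",  # hypothetical path, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=35000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```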
@@ -86,16 +86,16 @@ The following hyperparameters were used during training:
 | 0.4863 | 35.99 | 33000 | 7.2441 | 0.5003 | 23.0542 |
 | 0.5007 | 37.08 | 34000 | 7.1545 | 0.4948 | 22.9234 |
 | 0.4519 | 38.17 | 35000 | 7.1257 | 0.4922 | 22.8248 |
-| 0.3674 | 39.26 | 36000 | 0.4754
-| 0.3481 | 40.35 | 37000 | 0.4679
-| 0.2992 | 41.44 | 38000 | 0.4622
-| 0.2505 | 42.53 | 39000 | 0.4641
-| 0.2477 | 43.62 | 40000 | 0.4678
-| 0.1994 | 44.71 | 41000 | 0.4689
-| 0.1865 | 45.8 | 42000 | 0.4717
-| 0.2307 | 46.89 | 43000 | 0.4754
-| 0.1705 | 47.98 | 44000 | 0.4759
-| 0.2007 | 49.07 | 45000 | 0.4767
+| 0.3674 | 39.26 | 36000 | 7.0104 | 0.4754 | 22.6642 |
+| 0.3481 | 40.35 | 37000 | 7.0311 | 0.4679 | 22.6314 |
+| 0.2992 | 41.44 | 38000 | 6.9465 | 0.4622 | 22.2595 |
+| 0.2505 | 42.53 | 39000 | 6.9198 | 0.4641 | 22.1937 |
+| 0.2477 | 43.62 | 40000 | 7.2008 | 0.4678 | 22.8279 |
+| 0.1994 | 44.71 | 41000 | 7.1179 | 0.4689 | 22.3808 |
+| 0.1865 | 45.8 | 42000 | 7.1351 | 0.4717 | 22.5664 |
+| 0.2307 | 46.89 | 43000 | 7.1364 | 0.4754 | 22.3722 |
+| 0.1705 | 47.98 | 44000 | 7.0830 | 0.4759 | 22.3863 |
+| 0.2007 | 49.07 | 45000 | 7.1187 | 0.4767 | 22.4849 |
 
 
 ### Framework versions
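The Wer and Cer values reported in this diff are word and character error rates: the Levenshtein edit distance between the model transcript and the reference, normalized by reference length (sometimes multiplied by 100 and reported as a percentage). A minimal self-contained sketch of the computation, assuming nothing about the metric code actually used to produce this card:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance via single-row dynamic programming."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # distances for the empty-reference prefix
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                        # deletion
                dp[j - 1] + 1,                    # insertion
                prev + (ref[i - 1] != hyp[j - 1]) # substitution (or match)
            )
            prev = cur
    return dp[n]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: char-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

In practice a model card like this one is typically generated with a metrics library rather than hand-rolled code; the sketch is only meant to show what the numbers measure.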