SuperKogito committed
Commit c5da1a0
1 Parent(s): 36e7d4a

update model card README.md

Files changed (1)
  1. README.md +91 -0
README.md ADDED
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base-nl-3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# whisper-base-nl-3

This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unnamed dataset (the auto-generated card recorded the dataset as `None`).
It achieves the following results on the evaluation set (a brief usage sketch follows the results):
- Loss: 0.7927
- Wer: 31.4005
- Cer: 9.9570

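As a quick illustration of how such a checkpoint is typically loaded for transcription, here is a minimal sketch using the Hugging Face Transformers `pipeline` API. It is not taken from this repository: the repo id `SuperKogito/whisper-base-nl-3` is only inferred from the committer and model name, `sample.wav` is a placeholder path, and the target language (presumably Dutch, given the `nl` suffix) is not confirmed by this card.

```python
# Hypothetical usage sketch: repo id and audio path are placeholders.
from transformers import pipeline

asr = pipeline(
    task="automatic-speech-recognition",
    model="SuperKogito/whisper-base-nl-3",  # assumed repo id, not stated in the card
)

# Long recordings are transcribed in 30-second chunks; decoding the file requires ffmpeg.
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```
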
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP

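The sketch below shows roughly how the hyperparameters above map onto `transformers.Seq2SeqTrainingArguments`, the class commonly used for Whisper fine-tuning. It is a reconstruction under stated assumptions, not the original training script: `output_dir` is invented, and the model, datasets, and data collator setup are omitted entirely.

```python
# Hedged reconstruction of the listed hyperparameters; not the original script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-base-nl-3",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    seed=42,
    adam_beta1=0.9,                  # these three match the listed Adam settings
    adam_beta2=0.999,                # (they are also the library defaults)
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                       # "Native AMP" mixed precision; needs a CUDA device
)
```
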
### Training results

| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:-------:|:---------------:|:-------:|
| 0.5761 | 0.06 | 1000 | 10.1154 | 0.5675 | 28.1532 |
| 0.48 | 0.13 | 2000 | 9.6911 | 0.5239 | 26.4364 |
| 0.4094 | 0.19 | 3000 | 9.1532 | 0.4925 | 24.8355 |
| 0.4792 | 0.26 | 4000 | 8.8414 | 0.4702 | 24.1105 |
| 0.3444 | 0.32 | 5000 | 8.8531 | 0.4544 | 23.9017 |
| 0.3943 | 0.39 | 6000 | 8.3602 | 0.4446 | 22.7353 |
| 0.4925 | 0.45 | 7000 | 8.3724 | 0.4348 | 22.1788 |
| 0.4455 | 0.52 | 8000 | 8.2989 | 0.4270 | 21.7549 |
| 0.3987 | 0.58 | 9000 | 7.9417 | 0.4139 | 20.8424 |
| 0.3373 | 0.65 | 10000 | 7.8871 | 0.4116 | 21.2144 |
| 0.3808 | 0.71 | 11000 | 7.6264 | 0.4016 | 20.5092 |
| 0.4214 | 0.78 | 12000 | 7.4153 | 0.3949 | 20.0938 |
| 0.3029 | 0.84 | 13000 | 7.3581 | 0.3902 | 19.7347 |
| 0.3549 | 1.66 | 14000 | 7.1195 | 0.3908 | 19.4115 |
| 0.3385 | 1.78 | 15000 | 7.7792 | 0.3906 | 20.2051 |
| 0.3282 | 1.9 | 16000 | 7.1081 | 0.3923 | 19.2651 |
| 0.3196 | 2.02 | 17000 | 7.2249 | 0.3923 | 19.3352 |
| 0.3251 | 2.14 | 18000 | 7.1761 | 0.3981 | 19.4831 |
| 0.4162 | 2.25 | 19000 | 7.0590 | 0.3958 | 19.0577 |
| 0.2851 | 2.37 | 20000 | 7.0167 | 0.3953 | 19.2095 |
| 0.2982 | 2.49 | 21000 | 6.8426 | 0.3929 | 18.8100 |
| 0.3642 | 2.61 | 22000 | 6.8867 | 0.3954 | 18.6972 |
| 0.2297 | 2.73 | 23000 | 6.9384 | 0.3916 | 18.7330 |
| 0.2313 | 2.85 | 24000 | 6.7785 | 0.3930 | 18.6034 |
| 0.2833 | 2.97 | 25000 | 6.8552 | 0.3910 | 18.5981 |
| 0.2509 | 3.09 | 26000 | 6.8165 | 0.3949 | 18.5180 |
| 0.2085 | 3.2 | 27000 | 6.8113 | 0.3985 | 18.6133 |
| 0.2055 | 3.32 | 28000 | 6.8624 | 0.3995 | 18.7612 |
| 0.175 | 3.44 | 29000 | 6.7727 | 0.4009 | 18.4814 |
| 0.1701 | 3.56 | 30000 | 7.0136 | 0.3998 | 18.8344 |

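The WER and CER values above can be computed for any prediction/reference pairs with the Hugging Face `evaluate` library. The snippet below is a generic sketch with placeholder strings, not the evaluation code behind this card, and it assumes the reported numbers are percentages (error fraction multiplied by 100).

```python
# Generic WER/CER sketch with placeholder strings; not this card's evaluation script.
import evaluate  # pip install evaluate jiwer

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["dit is een voorbeeld zin"]  # hypothetical model output
references = ["dit is een voorbeeldzin"]    # hypothetical reference transcript

# Both metrics return error rates as fractions; multiplying by 100 gives
# percentage-style values like those reported in the table (assumption).
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```
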
### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2