MarOsz committed on
Commit cec7b39
1 Parent(s): 45153b9

Update metadata with huggingface_hub

Files changed (1)
  1. README.md +82 -71
README.md CHANGED
@@ -1,72 +1,83 @@
- ---
- language:
- - pl
- library_name: peft
- tags:
- - generated_from_trainer
- datasets:
- - mozilla-foundation/common_voice_17_0
- base_model: openai/whisper-base
- model-index:
- - name: Whisper Base Polish PEFT - s22678 prod
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # Whisper Base Polish PEFT - s22678 prod
-
- This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 17.0 dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.5546
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.001
- - train_batch_size: 52
- - eval_batch_size: 64
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 50
- - num_epochs: 10
- - mixed_precision_training: Native AMP
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 0.3265        | 1.0   | 400  | 0.4049          |
- | 0.2506        | 2.01  | 800  | 0.3920          |
- | 0.185         | 3.01  | 1200 | 0.3868          |
- | 0.1506        | 4.01  | 1600 | 0.3859          |
- | 0.1217        | 5.01  | 2000 | 0.3856          |
- | 0.0931        | 6.02  | 2400 | 0.3922          |
- | 0.0698        | 7.02  | 2800 | 0.3999          |
- | 0.0549        | 8.02  | 3200 | 0.4077          |
- | 0.0477        | 9.02  | 3600 | 0.4121          |
-
- ### Framework versions
-
- - PEFT 0.11.2.dev0
- - Transformers 4.36.0
- - Pytorch 2.1.0
- - Datasets 2.15.0
- - Tokenizers 0.15.1
+ ---
+ language:
+ - pl
+ library_name: peft
+ tags:
+ - generated_from_trainer
+ base_model: openai/whisper-base
+ datasets:
+ - mozilla-foundation/common_voice_17_0
+ model-index:
+ - name: Whisper Base Polish PEFT - s22678 prod
+   results:
+   - task:
+       type: automatic-speech-recognition
+       name: Automatic Speech Recognition
+     dataset:
+       name: Common Voice 17.0
+       type: mozilla-foundation/common_voice_17_0
+       split: test
+     metrics:
+     - type: wer
+       value: 42.070773263433814
+       name: WER
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Whisper Base Polish PEFT - s22678 prod
+
+ This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 17.0 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5546
+
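+ As a usage illustration (the card itself ships no example code), here is a minimal inference sketch. The adapter repo id `MarOsz/whisper-base-polish-peft` below is a placeholder, not confirmed by this card; the base checkpoint and PEFT usage follow the metadata above.
+
+ ```python
+ import torch
+ from peft import PeftModel
+ from transformers import WhisperForConditionalGeneration, WhisperProcessor
+
+ BASE = "openai/whisper-base"
+ ADAPTER = "MarOsz/whisper-base-polish-peft"  # placeholder repo id
+
+ processor = WhisperProcessor.from_pretrained(BASE, language="polish", task="transcribe")
+ model = WhisperForConditionalGeneration.from_pretrained(BASE)
+ model = PeftModel.from_pretrained(model, ADAPTER)  # attach the fine-tuned adapter
+ model.eval()
+
+ def transcribe(audio_array):
+     """Transcribe a 1-D float waveform sampled at 16 kHz."""
+     inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")
+     prompt_ids = processor.get_decoder_prompt_ids(language="polish", task="transcribe")
+     with torch.no_grad():
+         ids = model.generate(input_features=inputs.input_features,
+                              forced_decoder_ids=prompt_ids)
+     return processor.batch_decode(ids, skip_special_tokens=True)[0]
+ ```
+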
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
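+ Until the section above is filled in, the YAML metadata is the only pointer: the Polish configuration of Common Voice 17.0. A hedged loading sketch with `datasets` (the dataset is gated on the Hub, so a logged-in session that has accepted its terms is assumed):
+
+ ```python
+ from datasets import Audio, load_dataset
+
+ # "pl" selects the Polish configuration named in the card's metadata.
+ cv_pl = load_dataset("mozilla-foundation/common_voice_17_0", "pl", split="test")
+
+ # Whisper models consume 16 kHz audio; Common Voice clips ship at 48 kHz.
+ cv_pl = cv_pl.cast_column("audio", Audio(sampling_rate=16_000))
+
+ print(cv_pl[0]["sentence"])  # reference transcription for the first clip
+ ```
+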
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the configuration sketch after the list):
+ - learning_rate: 0.001
+ - train_batch_size: 52
+ - eval_batch_size: 64
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 50
+ - num_epochs: 10
+ - mixed_precision_training: Native AMP
+
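+ The `generated_from_trainer` tag suggests these values map onto `Seq2SeqTrainingArguments`; the sketch below is a reconstruction under that assumption (the output directory and the 400-step eval cadence, read off the results table, are guesses), not the author's actual script:
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="whisper-base-polish-peft",  # placeholder
+     learning_rate=1e-3,
+     per_device_train_batch_size=52,
+     per_device_eval_batch_size=64,
+     seed=42,
+     lr_scheduler_type="linear",   # Adam betas/epsilon above are the defaults
+     warmup_steps=50,
+     num_train_epochs=10,
+     fp16=True,                    # "Native AMP" mixed precision
+     evaluation_strategy="steps",  # guessed from the 400-step cadence below
+     eval_steps=400,
+ )
+ ```
+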
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 0.3265        | 1.0   | 400  | 0.4049          |
+ | 0.2506        | 2.01  | 800  | 0.3920          |
+ | 0.185         | 3.01  | 1200 | 0.3868          |
+ | 0.1506        | 4.01  | 1600 | 0.3859          |
+ | 0.1217        | 5.01  | 2000 | 0.3856          |
+ | 0.0931        | 6.02  | 2400 | 0.3922          |
+ | 0.0698        | 7.02  | 2800 | 0.3999          |
+ | 0.0549        | 8.02  | 3200 | 0.4077          |
+ | 0.0477        | 9.02  | 3600 | 0.4121          |
+
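+ The table tracks loss only; the 42.07 WER lives in the YAML metadata. A self-contained sketch of how such a figure is typically computed with the `evaluate` library (real runs score the full test split, usually after case and punctuation normalisation):
+
+ ```python
+ import evaluate
+
+ wer_metric = evaluate.load("wer")
+
+ # Toy stand-ins; in practice these come from running the model
+ # over the Common Voice 17.0 Polish test split.
+ predictions = ["ala ma kota"]
+ references = ["ala ma kota i psa"]
+
+ wer = 100 * wer_metric.compute(predictions=predictions, references=references)
+ print(f"WER: {wer:.2f}%")
+ ```
+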
+ ### Framework versions
+
+ - PEFT 0.11.2.dev0
+ - Transformers 4.36.0
+ - Pytorch 2.1.0
+ - Datasets 2.15.0
+ - Tokenizers 0.15.1