MarOsz committed
Commit 615ca2d
Parent: bae68bd

Update metadata with huggingface_hub

Files changed (1)
  1. README.md +71 -60
README.md CHANGED
---
language:
- pl
library_name: peft
tags:
- generated_from_trainer
base_model: openai/whisper-small
datasets:
- mozilla-foundation/common_voice_17_0
model-index:
- name: Whisper Small Polish PEFT - s22678 prod
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Common Voice 17.0
      type: mozilla-foundation/common_voice_17_0
      split: test
    metrics:
    - type: wer
      value: 25.286041189931353
      name: WER
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Small Polish PEFT - s22678 prod

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6719

The model-index metadata above additionally reports a WER of 25.29 on the Common Voice 17.0 test split.

## Model description

More information needed

## Intended uses & limitations

More information needed
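
The card does not yet document usage, but since this is a PEFT adapter on top of `openai/whisper-small`, loading it for Polish transcription could look roughly like the sketch below. The adapter id and audio path are placeholders, not values taken from this card.

```python
import librosa
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

BASE_ID = "openai/whisper-small"
ADAPTER_ID = "path/or/hub-id-of-this-adapter"  # placeholder: not stated in the card

# The processor handles feature extraction and tokenization for Whisper.
processor = WhisperProcessor.from_pretrained(BASE_ID, language="polish", task="transcribe")
base_model = WhisperForConditionalGeneration.from_pretrained(BASE_ID)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

# Whisper expects 16 kHz mono audio; "sample.wav" is a placeholder file.
audio, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(
        input_features=inputs.input_features,
        language="polish",
        task="transcribe",
    )
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```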

## Training and evaluation data

More information needed
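
The training and evaluation splits are not documented here; the metadata above only ties the WER figure to the Common Voice 17.0 `test` split. A minimal sketch of loading that split (the dataset is gated, so its Hub terms must be accepted and a token configured first):

```python
from datasets import Audio, load_dataset

# Gated dataset: accept the terms on its Hub page and log in
# (e.g. via `huggingface-cli login`) before this call will succeed.
common_voice_test = load_dataset(
    "mozilla-foundation/common_voice_17_0", "pl", split="test"
)

# Whisper consumes 16 kHz audio, so resample the audio column on the fly.
common_voice_test = common_voice_test.cast_column("audio", Audio(sampling_rate=16_000))
```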

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 10
- mixed_precision_training: Native AMP
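
As a sketch, these values map onto `Seq2SeqTrainingArguments` roughly as follows; the output directory is a placeholder, and the listed Adam betas/epsilon are the Trainer defaults, so they need no explicit arguments. The card does not record the PEFT/LoRA adapter configuration itself.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-polish-peft",  # placeholder name
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=15,
    max_steps=10,  # "training_steps: 10"
    fp16=True,     # "Native AMP" mixed-precision training
)
```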

### Training results



### Framework versions

- PEFT 0.11.2.dev0
- Transformers 4.36.0
- Pytorch 2.1.0
- Datasets 2.15.0
- Tokenizers 0.15.1