Commit df76731 by marinone94 (parent: 309997b)

update model card README.md

Files changed (1):
  1. README.md (+39 −5)
README.md CHANGED

```diff
@@ -4,9 +4,24 @@ tags:
 - generated_from_trainer
 datasets:
 - fleurs
+metrics:
+- wer
 model-index:
 - name: openai/whisper-tiny
-  results: []
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: fleurs
+      type: fleurs
+      config: en_us
+      split: validation
+      args: en_us
+    metrics:
+    - name: Wer
+      type: wer
+      value: 19.3465805193222
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,6 +30,9 @@ should probably proofread and complete it, then remove this comment. -->
 # openai/whisper-tiny
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the fleurs dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.5568
+- Wer: 19.3466
 
 ## Model description
 
@@ -34,15 +52,31 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 4
-- eval_batch_size: 2
+- train_batch_size: 64
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_ratio: 0.5
-- training_steps: 2
+- lr_scheduler_warmup_ratio: 0.2
+- training_steps: 407
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Wer     |
+|:-------------:|:-----:|:----:|:---------------:|:-------:|
+| 1.1599        | 0.1   | 40   | 1.1427          | 15.2139 |
+| 0.4655        | 1.1   | 80   | 0.5613          | 17.5911 |
+| 0.2753        | 2.09  | 120  | 0.5241          | 17.2132 |
+| 0.2077        | 3.09  | 160  | 0.5242          | 17.2620 |
+| 0.1636        | 4.09  | 200  | 0.5290          | 17.6643 |
+| 0.1322        | 5.09  | 240  | 0.5351          | 18.2128 |
+| 0.123         | 6.08  | 280  | 0.5429          | 18.9077 |
+| 0.1074        | 7.08  | 320  | 0.5500          | 19.0540 |
+| 0.1007        | 8.08  | 360  | 0.5553          | 19.3100 |
+| 0.0876        | 9.08  | 400  | 0.5568          | 19.3466 |
+
+
 ### Framework versions
 
 - Transformers 4.26.0.dev0
```
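
The metric this commit adds to the card, WER (word error rate), is the word-level edit distance divided by the reference length; the card reports it as a percentage (e.g. 19.3466). A minimal stdlib sketch of the definition — not the implementation the Trainer used to produce these numbers, and the function name is illustrative:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference word count (standard
    WER definition; a stdlib sketch, not the metric code behind this card)."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, one row at a time.
    dist = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dist[0] = dist[0], i
        for j, h in enumerate(hyp, 1):
            prev, dist[j] = dist[j], min(
                dist[j] + 1,        # deletion of r
                dist[j - 1] + 1,    # insertion of h
                prev + (r != h),    # substitution (free if words match)
            )
    return dist[len(hyp)] / len(ref)

# One substitution plus one deletion against a 4-word reference -> 0.5
print(word_error_rate("a b c d", "a x c"))
```

Multiply by 100 to get the percentage form shown in the tables above.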
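
The schedule implied by the updated hyperparameters (linear scheduler, `lr_scheduler_warmup_ratio: 0.2`, `training_steps: 407`, peak `learning_rate: 1e-05`) can be sketched as follows. This is an illustrative helper, not code from the training run; it assumes the warmup ratio is converted to whole steps by rounding up, so roughly the first 82 of 407 steps are warmup:

```python
import math

def linear_lr_with_warmup(step, base_lr=1e-05, warmup_ratio=0.2, total_steps=407):
    """Linear warmup from 0 to base_lr, then linear decay back to 0.
    Sketch of the 'linear' scheduler named in the card; helper name and the
    ceil-based ratio-to-steps conversion are assumptions."""
    warmup_steps = math.ceil(total_steps * warmup_ratio)  # ceil(81.4) -> 82
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Decay linearly over the remaining steps, never below zero.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Under these assumptions the peak learning rate of 1e-05 is reached around step 82 and the rate reaches 0 at step 407, which matches the evaluation cadence (every 40 steps) seen in the training-results table.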