marinone94 committed on
Commit
d6cebc9
1 Parent(s): 8a2f935

update model card README.md

Files changed (1)
  1. README.md +17 -9
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
   metrics:
   - name: Wer
     type: wer
-    value: 153.2258064516129
+    value: 168.6092926712438
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,8 +31,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the fleurs dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.6192
-- Wer: 153.2258
+- Loss: 1.0456
+- Wer: 168.6093
 
 ## Model description
 
@@ -52,21 +52,29 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 4
-- eval_batch_size: 2
+- train_batch_size: 64
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_ratio: 0.5
-- training_steps: 2
+- lr_scheduler_warmup_ratio: 0.2
+- training_steps: 112
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.565         | 0.5   | 1    | 1.6192          | 153.2258 |
-| 1.3028        | 1.0   | 2    | 1.6192          | 153.2258 |
+| 1.5299        | 0.1   | 11   | 1.5622          | 219.6711 |
+| 1.1908        | 0.2   | 22   | 1.3652          | 192.2401 |
+| 1.1161        | 0.29  | 33   | 1.1921          | 200.2395 |
+| 0.9216        | 1.05  | 44   | 1.1263          | 186.5240 |
+| 0.8441        | 1.15  | 55   | 1.0946          | 179.3230 |
+| 0.8505        | 1.25  | 66   | 1.0748          | 159.6839 |
+| 0.7844        | 2.01  | 77   | 1.0585          | 163.2924 |
+| 0.7208        | 2.11  | 88   | 1.0491          | 158.1031 |
+| 0.6481        | 2.21  | 99   | 1.0468          | 158.5183 |
+| 0.7912        | 2.3   | 110  | 1.0456          | 168.6093 |
 
 
 ### Framework versions
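A note on the Wer values in this diff: WER is reported in percent, and because substitutions, deletions, and insertions are all counted against the number of reference words, it can exceed 100 when the model emits many spurious words, as in the 153 → 168 range above. A minimal sketch of the metric (plain-Python word-level edit distance; the `wer` helper is ours for illustration, not something the card or Trainer exposes):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: (S + D + I) / number of reference words * 100."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i          # i deletions to reach an empty hypothesis
    for j in range(len(hyp) + 1):
        dist[0][j] = j          # j insertions from an empty reference
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution / match
    return 100.0 * dist[len(ref)][len(hyp)] / len(ref)

# A hypothesis much longer than the reference drives WER above 100%:
# 4 insertions against a 2-word reference gives 200.0.
print(wer("hello world", "oh hello there big wide world"))  # 200.0
```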
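For the scheduler settings in the diff (`lr_scheduler_type: linear`, `lr_scheduler_warmup_ratio: 0.2`, `training_steps: 112`), the learning rate ramps linearly from 0 up to 1e-05 over the warmup steps and then decays linearly back to 0. A sketch of that shape (this mirrors, rather than calls, the transformers linear schedule; the ceil rounding of warmup steps is our assumption):

```python
import math

def linear_schedule_lr(step: int, base_lr: float = 1e-05,
                       total_steps: int = 112, warmup_ratio: float = 0.2) -> float:
    """Linear warmup then linear decay, per the card's hyperparameters."""
    warmup_steps = math.ceil(total_steps * warmup_ratio)  # 23 for 112 steps (assumed rounding)
    if step < warmup_steps:
        # Ramp from 0 toward base_lr during warmup.
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr at the end of warmup to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(0))    # 0.0 — training starts at zero learning rate
print(linear_schedule_lr(23))   # 1e-05 — peak at the end of warmup
print(linear_schedule_lr(112))  # 0.0 — decayed to zero at the final step
```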