marinone94 committed on
Commit d559445
1 Parent(s): 58c7e8e

update model card README.md

Files changed (1): README.md +17 -9
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 123.40425531914893
+      value: 186.6677311192719
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,8 +31,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the fleurs dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.8532
-- Wer: 123.4043
+- Loss: 1.0049
+- Wer: 186.6677
 
 ## Model description
 
@@ -52,21 +52,29 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 7.5e-06
-- train_batch_size: 4
-- eval_batch_size: 2
+- train_batch_size: 16
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_ratio: 0.5
-- training_steps: 2
+- lr_scheduler_warmup_ratio: 0.3
+- training_steps: 448
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer      |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.8019        | 0.5   | 1    | 1.8532          | 123.4043 |
-| 1.6763        | 1.0   | 2    | 1.8532          | 123.4043 |
+| 1.4112        | 0.1   | 44   | 1.4919          | 245.2978 |
+| 1.0501        | 0.2   | 88   | 1.2255          | 219.9425 |
+| 0.9033        | 0.29  | 132  | 1.1203          | 205.7800 |
+| 0.8142        | 1.06  | 176  | 1.0675          | 192.8788 |
+| 0.8029        | 1.16  | 220  | 1.0393          | 178.4289 |
+| 0.6324        | 1.25  | 264  | 1.0302          | 216.6055 |
+| 0.6971        | 2.02  | 308  | 1.0135          | 179.3709 |
+| 0.6051        | 2.12  | 352  | 1.0065          | 194.6352 |
+| 0.6048        | 2.21  | 396  | 1.0030          | 173.4792 |
+| 0.585         | 2.31  | 440  | 1.0049          | 186.6677 |
 
 
 ### Framework versions
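A note on the metric: Wer values above 100 in this card are not an error. Word error rate divides the total number of substitutions, insertions, and deletions by the number of *reference* words, so a hypothesis with many spurious insertions can score well over 100%. Trainer-generated cards typically compute it with a library such as `jiwer` or `evaluate`; the pure-Python sketch below is only an illustration of the same definition, not the exact implementation used here.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("hello world", "hello word"))   # → 50.0 (1 substitution / 2 reference words)
print(wer("a b", "a b c d"))              # → 100.0 (insertions alone can reach 100%+)
```

The second call shows how scores like 186.67 arise: insertions are counted against a shorter reference.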
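For readers who want to reproduce the run, the hyperparameter list above maps onto `transformers`' `Seq2SeqTrainingArguments` roughly as follows. Only the listed values come from the card; `output_dir` and everything not listed (evaluation strategy, save strategy, etc.) are hypothetical placeholders.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the configuration implied by the card's hyperparameters.
# Adam betas/epsilon match transformers' defaults, so they need no override.
args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-fleurs",  # hypothetical path
    learning_rate=7.5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.3,
    max_steps=448,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```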