Dacavi committed on
Commit 935f00a
1 Parent(s): 68e565a

End of training

Files changed (1)
  1. README.md +16 -16
README.md CHANGED
@@ -23,7 +23,7 @@ model-index:
  metrics:
  - name: Wer
  type: wer
- value: 46.25
+ value: 13.333333333333334
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,8 +33,8 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.1396
- - Wer: 46.25
+ - Loss: 0.1798
+ - Wer: 13.3333

  ## Model description

@@ -60,23 +60,23 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 50
- - training_steps: 100
+ - training_steps: 1000
  - mixed_precision_training: Native AMP

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Wer      |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | No log        | 10.0  | 10   | 4.1585          | 27.5000  |
- | No log        | 20.0  | 20   | 3.0184          | 33.75    |
- | 3.8104        | 30.0  | 30   | 2.1938          | 105.0    |
- | 3.8104        | 40.0  | 40   | 1.4511          | 267.5    |
- | 0.8755        | 50.0  | 50   | 1.3480          | 178.75   |
- | 0.8755        | 60.0  | 60   | 1.2947          | 100.0    |
- | 0.8755        | 70.0  | 70   | 1.2436          | 86.25    |
- | 0.4202        | 80.0  | 80   | 1.1970          | 66.25    |
- | 0.4202        | 90.0  | 90   | 1.1718          | 52.5     |
- | 0.188         | 100.0 | 100  | 1.1396          | 46.25    |
+ | Training Loss | Epoch | Step | Validation Loss | Wer      |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.6172        | 0.1   | 100  | 0.6200          | 107.3958 |
+ | 0.2709        | 0.21  | 200  | 0.3492          | 67.0833  |
+ | 0.2839        | 0.31  | 300  | 0.2959          | 40.7292  |
+ | 0.2876        | 0.41  | 400  | 0.2766          | 29.5833  |
+ | 0.2296        | 0.52  | 500  | 0.2375          | 17.3958  |
+ | 0.2649        | 0.62  | 600  | 0.2102          | 15.3125  |
+ | 0.2644        | 0.72  | 700  | 0.1957          | 17.3958  |
+ | 0.2384        | 0.82  | 800  | 0.1886          | 13.7500  |
+ | 0.2325        | 0.93  | 900  | 0.1811          | 13.6458  |
+ | 0.1374        | 1.03  | 1000 | 0.1798          | 13.3333  |


  ### Framework versions
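
The Wer column and metric in the diff above are word error rates reported as percentages (so 13.3333 means roughly 13.3 % of words differ from the reference transcripts). As a rough illustration of how such a number is produced, here is a minimal sketch using the Hugging Face `evaluate` library; the example predictions and references are invented, not outputs of this model:

```python
# Minimal WER sketch with the Hugging Face `evaluate` library.
# The sentences below are made-up examples, not outputs from this training run.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hello world", "good morning every one"]
references = ["hello world", "good morning everyone"]

# `compute` returns WER as a fraction; model cards like this one report 100 * WER.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```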
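
The hyperparameters listed in the last hunk (linear scheduler, 50 warmup steps, 1000 training steps, Adam with betas=(0.9,0.999) and epsilon=1e-08, native AMP) map onto `transformers.Seq2SeqTrainingArguments` fields roughly as sketched below. This is a reconstruction, not the author's script: the learning rate, batch size, and output directory are not shown in this diff, so those values are placeholders.

```python
# Hedged sketch of a Seq2SeqTrainingArguments configuration matching the
# hyperparameters listed in the card. Values marked "placeholder" are not
# taken from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-finetuned",  # placeholder
    learning_rate=1e-5,                      # placeholder, not shown in the diff
    per_device_train_batch_size=16,          # placeholder, not shown in the diff
    lr_scheduler_type="linear",              # lr_scheduler_type: linear
    warmup_steps=50,                         # lr_scheduler_warmup_steps: 50
    max_steps=1000,                          # training_steps: 1000
    adam_beta1=0.9,                          # Adam betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,                      # epsilon=1e-08
    fp16=True,                               # mixed_precision_training: Native AMP
    evaluation_strategy="steps",             # assumption: eval every 100 steps,
    eval_steps=100,                          #   matching the results table
    predict_with_generate=True,              # generate text so WER can be computed
)
```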
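
Since the card describes a fine-tuned openai/whisper-small checkpoint, a typical way to use it for transcription is the `transformers` ASR pipeline. The repo id below is a placeholder; the actual repository name is not part of this diff.

```python
# Loading the fine-tuned Whisper checkpoint for inference with the
# `transformers` pipeline. The model id is a placeholder, not the real repo name.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Dacavi/whisper-small-finetuned",  # placeholder repo id
)

# Transcribe a local audio file; returns a dict with the decoded text.
result = asr("sample.wav")
print(result["text"])
```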