Padajno committed on
Commit
d3e5963
1 Parent(s): 2b7ba13

End of training

Files changed (2):
  1. README.md +14 -9
  2. generation_config.json +1 -1
README.md CHANGED
@@ -24,7 +24,7 @@ model-index:
   metrics:
   - name: Wer
     type: wer
-    value: 736.4139693356047
+    value: 25.936967632027258
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -34,9 +34,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3656
-- Wer Ortho: 346.6232
-- Wer: 736.4140
+- Loss: 0.3707
+- Wer Ortho: 28.2066
+- Wer: 25.9370
 
 ## Model description
 
@@ -62,19 +62,24 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 50
-- training_steps: 500
+- training_steps: 600
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss | Wer Ortho | Wer      |
-|:-------------:|:------:|:----:|:---------------:|:---------:|:--------:|
-| 0.0427        | 3.0675 | 500  | 0.3656          | 346.6232  | 736.4140 |
+| Training Loss | Epoch  | Step | Validation Loss | Wer Ortho | Wer     |
+|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
+| 0.9497        | 0.6135 | 100  | 0.8683          | 37.6703   | 35.8745 |
+| 0.171         | 1.2270 | 200  | 0.3742          | 33.4847   | 31.2039 |
+| 0.1841        | 1.8405 | 300  | 0.3407          | 31.0585   | 28.7337 |
+| 0.0592        | 2.4540 | 400  | 0.3492          | 29.5545   | 27.1153 |
+| 0.0434        | 3.0675 | 500  | 0.3624          | 29.7106   | 27.2572 |
+| 0.027         | 3.6810 | 600  | 0.3707          | 28.2066   | 25.9370 |
 
 
 ### Framework versions
 
-- Transformers 4.41.0
+- Transformers 4.41.1
 - Pytorch 2.3.0+cu121
 - Datasets 2.19.1
 - Tokenizers 0.19.1
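The headline fix in this commit is the Wer metric dropping from 736.41 to 25.94. Word error rate is word-level edit distance divided by the number of reference words, so it can legitimately exceed 100 when the hypothesis contains many insertions, which is how the earlier 736 figure was possible. A minimal illustrative sketch of the metric (plain Python; the card's actual numbers come from whatever implementation the training script used, typically the `evaluate`/`jiwer` WER):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    if not ref:
        raise ValueError("reference must contain at least one word")
    # Rolling-row Levenshtein over words: d[j] = distance(ref[:i], hyp[:j])
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,         # deletion
                       d[j - 1] + 1,     # insertion
                       prev + (r != h))  # substitution (free on match)
            prev = cur
    return 100.0 * d[-1] / len(ref)

# 1 insertion against 3 reference words -> roughly 33.3
print(wer("the cat sat", "the cat sat on"))
```

Because the denominator is the reference length only, a short reference with a long hallucinated hypothesis drives WER far past 100, the failure mode the earlier checkpoint apparently hit.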
generation_config.json CHANGED
@@ -260,5 +260,5 @@
     "transcribe": 50359,
     "translate": 50358
   },
-  "transformers_version": "4.41.0"
+  "transformers_version": "4.41.1"
 }
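The generation_config.json change is only a version stamp: transformers records the library version that last saved the config, so re-saving under 4.41.1 rewrites the field. A stdlib-only sketch of the same bump; the two task-token keys are copied from the diff, and the enclosing `"task_to_id"` key plus the omission of Whisper's other generation keys are assumptions for brevity:

```python
import json

# Abbreviated stand-in for the repo's generation_config.json
# ("task_to_id" nesting is assumed; other keys omitted).
snippet = """{
  "task_to_id": {
    "transcribe": 50359,
    "translate": 50358
  },
  "transformers_version": "4.41.0"
}"""

cfg = json.loads(snippet)
# Re-saving with a newer transformers install effectively does this:
cfg["transformers_version"] = "4.41.1"
print(json.dumps(cfg, indent=2))
```

Only the version string changes; the decoding defaults, including the task token ids, are untouched.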