Frorozcol committed on
Commit
89fa966
1 Parent(s): 7a2eda2

End of training

Files changed (3)
  1. README.md +20 -12
  2. generation_config.json +1 -1
  3. model.safetensors +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
      metrics:
      - name: Wer
        type: wer
-       value: 34.69072164948454
+       value: 34.454756380510446
  ---
  
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,9 +32,9 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.6369
- - Wer Ortho: 35.1642
- - Wer: 34.6907
+ - Loss: 0.8668
+ - Wer Ortho: 34.2615
+ - Wer: 34.4548
  
  ## Model description
  
@@ -60,18 +60,26 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: constant_with_warmup
  - lr_scheduler_warmup_steps: 50
- - training_steps: 500
+ - training_steps: 4000
+ - mixed_precision_training: Native AMP
  
  ### Training results
  
- | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
- |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
- | 0.0007 | 17.86 | 500 | 0.6369 | 35.1642 | 34.6907 |
+ | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
+ |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
+ | 0.0012 | 17.86 | 500 | 0.6821 | 33.2324 | 33.6427 |
+ | 0.0002 | 35.71 | 1000 | 0.7362 | 34.0194 | 34.0487 |
+ | 0.0001 | 53.57 | 1500 | 0.7689 | 33.9588 | 33.9907 |
+ | 0.0001 | 71.43 | 2000 | 0.7934 | 34.5036 | 34.4548 |
+ | 0.0 | 89.29 | 2500 | 0.8168 | 34.4431 | 34.3968 |
+ | 0.0 | 107.14 | 3000 | 0.8352 | 34.5642 | 34.5708 |
+ | 0.0 | 125.0 | 3500 | 0.8514 | 34.3220 | 34.5128 |
+ | 0.0 | 142.86 | 4000 | 0.8668 | 34.2615 | 34.4548 |
  
  
  ### Framework versions
  
- - Transformers 4.32.0
- - Pytorch 2.0.1+cu118
- - Datasets 2.14.4
- - Tokenizers 0.13.3
+ - Transformers 4.35.2
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
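
The card above reports WER on the evaluation split of PolyAI/minds14. Below is a minimal sketch of how such numbers can be recomputed for a checkpoint like this one; the repository id `Frorozcol/whisper-tiny-minds14` and the `en-US` dataset configuration are assumptions, since this diff does not show them.

```python
# Minimal sketch (not part of the diff): run the fine-tuned checkpoint on
# PolyAI/minds14 and recompute WER with the evaluate library.
from transformers import pipeline
from datasets import load_dataset, Audio
import evaluate

asr = pipeline(
    "automatic-speech-recognition",
    model="Frorozcol/whisper-tiny-minds14",  # hypothetical repo id, not shown in the diff
)

# The model card names PolyAI/minds14; the "en-US" config and small slice are assumptions.
ds = load_dataset("PolyAI/minds14", name="en-US", split="train[:8]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # Whisper expects 16 kHz input

wer_metric = evaluate.load("wer")
predictions = [asr(sample["audio"]["array"])["text"] for sample in ds]
references = [sample["transcription"] for sample in ds]

# The card's "Wer" value corresponds to 100 * this ratio on the evaluation split.
print(100 * wer_metric.compute(predictions=predictions, references=references))
```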
generation_config.json CHANGED
@@ -249,5 +249,5 @@
      "transcribe": 50359,
      "translate": 50358
    },
-   "transformers_version": "4.32.0"
+   "transformers_version": "4.35.2"
  }
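
This hunk only bumps `transformers_version`, but the surrounding context shows Whisper's task-to-token-id map. A small sketch for inspecting those fields, again assuming the hypothetical repo id used above:

```python
# Minimal sketch (assumption: same hypothetical repo id as above).
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("Frorozcol/whisper-tiny-minds14")
print(gen_cfg.transformers_version)  # "4.35.2" after this commit
print(gen_cfg.task_to_id)            # {"transcribe": 50359, "translate": 50358}
```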
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1fb987e3862018e70a20a67a7007daff3d69f756fdef42d5b98e2181632fad45
+ oid sha256:03fdf32d6844a6e502a71f1870eeabdcb2cee1cf99b7c53ae49c639db2cdd6c9
  size 151061672
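
model.safetensors is tracked with Git LFS, so the commit only rewrites the pointer file: the `oid` is the sha256 digest of the new weights blob and `size` is its byte length. A minimal sketch for checking a downloaded copy against this pointer:

```python
# Minimal sketch (not part of the diff): verify downloaded weights against the
# LFS pointer recorded in this commit.
import hashlib
from pathlib import Path

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the sha256 hex digest of a file, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

path = "model.safetensors"  # local copy of the downloaded weights
assert Path(path).stat().st_size == 151061672
assert lfs_oid(path) == "03fdf32d6844a6e502a71f1870eeabdcb2cee1cf99b7c53ae49c639db2cdd6c9"
```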