futureProofGlitch committed
Commit 0aba73c
1 Parent(s): 7ee3eda

End of training

README.md CHANGED
@@ -24,7 +24,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 59.27575494338845
+      value: 46.300985978395836
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -34,9 +34,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [futureProofGlitch/whisper-small](https://huggingface.co/futureProofGlitch/whisper-small) on the Gigaspeech dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.1812
-- Wer Ortho: 65.1852
-- Wer: 59.2758
+- Loss: 0.3434
+- Wer Ortho: 56.8717
+- Wer: 46.3010
 
 ## Model description
 
@@ -62,14 +62,19 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 50
-- training_steps: 500
+- training_steps: 3000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer     |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
-| 1.6925        | 0.5   | 500  | 1.1812          | 65.1852   | 59.2758 |
+| 0.2268        | 0.5   | 500  | 0.3308          | 29.8287   | 18.2124 |
+| 0.2039        | 0.99  | 1000 | 0.3082          | 28.3139   | 16.3612 |
+| 0.1071        | 1.49  | 1500 | 0.3209          | 30.5425   | 18.9117 |
+| 0.1174        | 1.98  | 2000 | 0.3140          | 51.1370   | 40.1655 |
+| 0.0555        | 2.48  | 2500 | 0.3525          | 65.5197   | 53.9069 |
+| 0.0603        | 2.98  | 3000 | 0.3434          | 56.8717   | 46.3010 |
 
 
 ### Framework versions
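
For context, the updated hyperparameters in this hunk map roughly onto Hugging Face `Seq2SeqTrainingArguments`. The snippet below is a minimal sketch under stated assumptions, not the exact configuration used for this run: only the scheduler type, warmup steps, training steps, and mixed precision come from the card, while the output directory, batch size, and learning rate are placeholders.

```python
# Sketch only: values marked "placeholder" are NOT taken from this model card.
import torch
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-gigaspeech",   # placeholder output path
    per_device_train_batch_size=16,            # placeholder, not listed in this diff
    learning_rate=1e-5,                        # placeholder, not listed in this diff
    lr_scheduler_type="constant_with_warmup",  # from the card
    warmup_steps=50,                           # from the card (lr_scheduler_warmup_steps)
    max_steps=3000,                            # training_steps after this commit (was 500)
    fp16=torch.cuda.is_available(),            # card reports "Native AMP" mixed precision
    report_to=["tensorboard"],                 # matches the runs/ tfevents file in this commit
)
```

The jump in `training_steps` from 500 to 3000 is what produces the extra evaluation rows (one every 500 steps) in the training-results table above.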
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1ab9b8dd0a205906bf674d6e98eeb8e7a6dc26b3723fb9aec3cea359691ce4f4
+oid sha256:7207b022a6c4ac3e62d20232fe19335eea2af5ed1d5963df2f273ef86a905dee
 size 966995080
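
The updated LFS pointer above references the new 3000-step checkpoint weights. A minimal inference sketch with the `transformers` ASR pipeline follows; `repo_id` and the audio path are placeholders (the actual repository id of this fine-tune is not shown in the diff).

```python
# Minimal sketch: `repo_id` and "sample.wav" are placeholders, not values from this commit.
import torch
from transformers import pipeline

repo_id = "futureProofGlitch/whisper-small"  # placeholder: substitute this repository's id

asr = pipeline(
    "automatic-speech-recognition",
    model=repo_id,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

print(asr("sample.wav")["text"])  # placeholder audio file; 16 kHz mono works best
```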
runs/Mar18_18-25-55_f497e85d0054/events.out.tfevents.1710786356.f497e85d0054.4809.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:23aa8f3dfd493ec4c482a648561310229ccb4b012413878922eb5fd56a6111d9
-size 21240
+oid sha256:a94f8c9588b0c784c713102a1931ca1780358d6a52180e1c2b7f74b91d06d43b
+size 21594
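
This events file holds the scalar curves behind the training-results table. The sketch below reads it with TensorBoard's `EventAccumulator`; the scalar tag name `"eval/wer"` is an assumption about what the Trainer logged, so list the available tags first.

```python
# Sketch for inspecting the updated TensorBoard log; the tag "eval/wer" is an
# assumption -- print ea.Tags() to see what was actually recorded.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

log_path = (
    "runs/Mar18_18-25-55_f497e85d0054/"
    "events.out.tfevents.1710786356.f497e85d0054.4809.0"
)
ea = EventAccumulator(log_path)
ea.Reload()  # parse the event file

print(ea.Tags()["scalars"])           # scalar series the Trainer logged
for event in ea.Scalars("eval/wer"):  # assumed tag name
    print(event.step, event.value)
```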