JoshEe00 committed on
Commit 6bad00f (1 parent: d9425ea)

End of training

README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
  metrics:
  - name: Wer
  type: wer
- value: 28.91705069124424
+ value: 0.26380766731643923
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,9 +32,9 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.8974
- - Wer Ortho: 30.4322
- - Wer: 28.9171
+ - Loss: 1.0871
+ - Wer Ortho: 26.8342
+ - Wer: 0.2638

  ## Model description

@@ -60,14 +60,21 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: constant_with_warmup
  - lr_scheduler_warmup_steps: 50
- - training_steps: 500
+ - training_steps: 4000
  - mixed_precision_training: Native AMP

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
- |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
- | 0.0007 | 17.86 | 500 | 0.8974 | 30.4322 | 28.9171 |
+ | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
+ |:-------------:|:------:|:----:|:---------------:|:---------:|:------:|
+ | 0.0021 | 17.86 | 500 | 0.7863 | 26.6304 | 0.2534 |
+ | 0.0002 | 35.71 | 1000 | 0.8689 | 26.7663 | 0.2612 |
+ | 0.0001 | 53.57 | 1500 | 0.9230 | 27.2418 | 0.2664 |
+ | 0.0001 | 71.43 | 2000 | 0.9637 | 27.1739 | 0.2664 |
+ | 0.0 | 89.29 | 2500 | 0.9977 | 26.9022 | 0.2638 |
+ | 0.0 | 107.14 | 3000 | 1.0277 | 27.1739 | 0.2664 |
+ | 0.0 | 125.0 | 3500 | 1.0571 | 27.1739 | 0.2671 |
+ | 0.0 | 142.86 | 4000 | 1.0871 | 26.8342 | 0.2638 |


  ### Framework versions
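The hyperparameters listed in the diff map onto the standard `transformers` `Seq2SeqTrainingArguments`. The following is a minimal sketch of that configuration, not the author's actual training script: only values visible in this diff are filled in, while `output_dir`, learning rate, batch size, and the evaluation/save cadence (assumed to be every 500 steps, based on the results table) are placeholders or assumptions.

```python
# Minimal sketch reconstructing the hyperparameters visible in the diff with
# transformers' Seq2SeqTrainingArguments; NOT the author's actual script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-minds14",        # placeholder: real value not shown in the diff
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=4000,                           # "training_steps: 4000" in the updated card
    fp16=True,                                # "mixed_precision_training: Native AMP"
    evaluation_strategy="steps",              # assumption: eval every 500 steps, per the results table
    eval_steps=500,
    save_steps=500,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer
    # setup in transformers, so it needs no explicit arguments here.
)
```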
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c220cb6cb1277f8a3357ab1a4161283e651e760b4b9df1022d231e6f45fd8188
+ oid sha256:537a34896bbaa7be826550c9e35ed6c5001b07ea7050bae4731072948c186e87
  size 151061672
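The entry above is a git-lfs pointer (sha256 oid plus byte size), not the weights themselves. As a small illustration, assuming the actual `model.safetensors` has been downloaded locally, a check against the new pointer could look like this sketch:

```python
# Sketch: verify a locally downloaded model.safetensors against the
# git-lfs pointer above (sha256 oid and byte size). The local path is an assumption.
import hashlib
from pathlib import Path

expected_oid = "537a34896bbaa7be826550c9e35ed6c5001b07ea7050bae4731072948c186e87"
expected_size = 151061672

data = Path("model.safetensors").read_bytes()  # assumed download location
assert len(data) == expected_size, f"unexpected size: {len(data)}"
assert hashlib.sha256(data).hexdigest() == expected_oid, "sha256 mismatch"
print("model.safetensors matches the LFS pointer")
```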
runs/May14_14-51-29_mum-hpc2-gpu3/events.out.tfevents.1715669490.mum-hpc2-gpu3 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2eea5ac69cf78138179928f7ce466125f200546f396294a02d2a9fd16b6c2145
- size 38139
+ oid sha256:6c564e1209f5a7a126d27301fe92cfb0a801c570559190ed3a9d09d14c4b6e99
+ size 43455
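One note on the metrics in this commit: `Wer Ortho` still reads as a percentage (26.8342) while the normalized `Wer` and the model-index `value` now read as fractions (0.2638 and 0.26380766731643923), which suggests the normalized WER is now reported as a fraction rather than a percentage. A minimal sketch with the `evaluate` library, using made-up strings, shows the two scales for the same error rate:

```python
# Sketch (not the card's evaluation code): evaluate's `wer` metric returns a
# fraction, so 0.2638 corresponds to 26.38%. The strings below are made up.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hello i would like to check my balance"]
references = ["hello i'd like to check my balance please"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER as a fraction:   {wer:.4f}")
print(f"WER as a percentage: {100 * wer:.2f}%")
```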