yuweiiizz committed
Commit 4983462
1 Parent(s): 528242c

End of training

README.md CHANGED
@@ -20,7 +20,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.1 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6546
+- Loss: 0.9567
 
 ## Model description
 
@@ -40,29 +40,29 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.001
-- train_batch_size: 16
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 4
+- num_epochs: 2
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 0.9384        | 1.0   | 1550 | 0.9004          |
-| 0.7257        | 2.0   | 3100 | 0.7403          |
-| 0.4672        | 3.0   | 4650 | 0.6729          |
-| 0.3205        | 4.0   | 6200 | 0.6546          |
+| Training Loss | Epoch  | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| 1.0718        | 0.9998 | 2498 | 1.0502          |
+| 0.8117        | 1.9996 | 4996 | 0.9567          |
 
 
 ### Framework versions
 
 - PEFT 0.10.0
 - Transformers 4.40.1
-- Pytorch 2.2.2+cu121
+- Pytorch 2.3.0+cu121
 - Datasets 2.19.0
 - Tokenizers 0.19.1
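
For reference, a minimal sketch of training arguments matching the updated hyperparameters above. The output directory and the choice of `Seq2SeqTrainingArguments` (rather than plain `TrainingArguments`) are assumptions, not taken from this commit:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed in the updated README; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-finetune",  # placeholder, not from this commit
    learning_rate=1e-3,
    per_device_train_batch_size=8,        # train_batch_size: 8
    per_device_eval_batch_size=8,         # eval_batch_size: 8
    gradient_accumulation_steps=2,        # total train batch size: 8 * 2 = 16
    num_train_epochs=2,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,                            # "Native AMP" mixed precision
)
```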
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e92ff42904b04e9d922e1337b8432525aa2fa852a692628ef9a1e4de1c1b870b
+oid sha256:e8c69f495c7c075eddd6064ea350debc44238671004dbdf2f128f41e18d9443f
 size 19548472
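
The file replaced here is the PEFT adapter trained on top of `openai/whisper-small`. A minimal loading sketch, assuming the versions listed in the README (PEFT 0.10.0, Transformers 4.40.1); the adapter repo id below is a placeholder, since the actual repository name is not shown in this diff:

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Base model and processor from the Hub.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# Attach the adapter weights from this repository (placeholder id; substitute the real repo).
model = PeftModel.from_pretrained(base, "yuweiiizz/<adapter-repo>")
model.eval()
```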
runs/May01_19-07-59_09e62dd42026/events.out.tfevents.1714590480.09e62dd42026.26.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:390a605cbd3a75d93f32f032e1ed55b67953a35e1e9815a8ddc540c7b7d092eb
-size 43783
+oid sha256:a49194f644eacf953fede0b6da2657c9e48a83b488bf4c30dbb46b6c3eb5938c
+size 48417
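
Both changed binaries are Git LFS pointer files: the actual content is stored out of band and identified by the `oid sha256:` and `size` fields. A small sketch (the helper is hypothetical, not part of this repo) for checking a downloaded file against its pointer:

```python
import hashlib
from pathlib import Path

def matches_lfs_pointer(path: str, expected_sha256: str, expected_size: int) -> bool:
    """Return True if the file's size and SHA-256 match the Git LFS pointer fields."""
    data = Path(path).read_bytes()
    return len(data) == expected_size and hashlib.sha256(data).hexdigest() == expected_sha256

# Values from the adapter_model.safetensors pointer in this commit:
ok = matches_lfs_pointer(
    "adapter_model.safetensors",
    "e8c69f495c7c075eddd6064ea350debc44238671004dbdf2f128f41e18d9443f",
    19548472,
)
```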