CKSINGH committed on
Commit 06f27cb
1 Parent(s): 0d955a2

End of training
README.md CHANGED
@@ -8,19 +8,19 @@ tags:
 metrics:
 - wer
 model-index:
-- name: Whisper Small Hi - CKS 1111 gramin dataset over common voice
+- name: Whisper Small Hi - CKS 111102 gramin dataset over common voice
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Whisper Small Hi - CKS 1111 gramin dataset over common voice
+# Whisper Small Hi - CKS 111102 gramin dataset over common voice
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1661
-- Wer: 21.6569
+- Loss: 0.2500
+- Wer: 27.5791
 
 ## Model description
 
@@ -45,16 +45,19 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 1
-- training_steps: 2
+- lr_scheduler_warmup_steps: 100
+- training_steps: 500
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer     |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.2145        | 0.02  | 1    | 0.1826          | 24.9480 |
-| 0.2324        | 0.04  | 2    | 0.1661          | 21.6569 |
+| 0.2056        | 1.92  | 100  | 0.1679          | 27.0708 |
+| 0.065         | 3.85  | 200  | 0.2057          | 28.1062 |
+| 0.0153        | 5.77  | 300  | 0.2324          | 28.0873 |
+| 0.0031        | 7.69  | 400  | 0.2461          | 27.5979 |
+| 0.0013        | 9.62  | 500  | 0.2500          | 27.5791 |
 
 
 ### Framework versions
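The Wer column above is the word error rate (in percent): the word-level edit distance between the reference transcript and the model's hypothesis, divided by the number of reference words. As a minimal self-contained sketch of that metric (not the Trainer's own implementation, which typically uses the `evaluate`/`jiwer` libraries):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sat mat"))  # 2 deletions / 6 reference words
```

Multiply by 100 to get the percentage form reported in the table (e.g. a Wer of 27.5791 means roughly 27.6 word-level errors per 100 reference words).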
generation_config.json CHANGED
@@ -6,6 +6,20 @@
   "bos_token_id": 50257,
   "decoder_start_token_id": 50258,
   "eos_token_id": 50257,
+  "forced_decoder_ids": [
+    [
+      1,
+      50276
+    ],
+    [
+      2,
+      50359
+    ],
+    [
+      3,
+      50363
+    ]
+  ],
   "max_length": 448,
   "pad_token_id": 50257,
   "return_timestamps": false,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7298c868c7881648ed6b50e40bc5d0485913b8c9541e3bdb591d7b13fbc076a3
+oid sha256:f8335d6b1b23e3b0a0400a1946c7ad0dba6b7dfc805de45fcd49fafdee804e69
 size 966995080
runs/Nov12_19-08-01_16a4cfcfa22f/events.out.tfevents.1699816084.16a4cfcfa22f.20565.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:df5263702b32b58b753a0d3367e6b901fc340b17fa75d0bc87ee3b1df6a69760
-size 7563
+oid sha256:e0e2ddf06a373ba4902f41aab3e62d79b952421cdcfc482c4c6c660af7dd2b24
+size 7917