DewiBrynJones committed on
Commit ccdbf76
1 Parent(s): 2c1e532

Model save
README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1480
- - Wer: 25.1341
+ - Loss: 0.1580
+ - Wer: 10.0249
 
  ## Model description
 
@@ -39,28 +39,27 @@ More information needed
  The following hyperparameters were used during training:
  - learning_rate: 1e-05
  - train_batch_size: 4
- - eval_batch_size: 8
+ - eval_batch_size: 1
  - seed: 42
  - gradient_accumulation_steps: 8
  - total_train_batch_size: 32
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
- - training_steps: 4000
+ - training_steps: 3000
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Wer     |
  |:-------------:|:-----:|:----:|:---------------:|:-------:|
- | 0.2078        | 0.25  | 1000 | 0.2198          | 28.7556 |
- | 0.1623        | 0.5   | 2000 | 0.1800          | 31.3698 |
- | 0.1417        | 0.75  | 3000 | 0.1585          | 18.7051 |
- | 0.1188        | 1.01  | 4000 | 0.1480          | 25.1341 |
+ | 0.2106        | 0.25  | 1000 | 0.2133          | 14.0954 |
+ | 0.1599        | 0.5   | 2000 | 0.1756          | 11.2101 |
+ | 0.1319        | 0.75  | 3000 | 0.1580          | 10.0249 |
 
 
  ### Framework versions
 
- - Transformers 4.37.1
- - Pytorch 2.1.2+cu121
- - Datasets 2.16.1
- - Tokenizers 0.15.1
+ - Transformers 4.39.3
+ - Pytorch 2.2.2+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
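The hyperparameter list in the updated README can be collected into a single configuration, which also makes the relationship between the batch-size fields explicit: the total train batch size is the per-device batch size times the gradient accumulation steps. A minimal sketch, using only values stated in the model card (the dictionary key names follow Hugging Face `Seq2SeqTrainingArguments` conventions and are an assumption, since the card lists the values without field names):

```python
# Hyperparameters as listed in the updated model card. Key names are
# assumed to mirror transformers' Seq2SeqTrainingArguments fields.
training_args = {
    "learning_rate": 1e-05,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 1,
    "seed": 42,
    "gradient_accumulation_steps": 8,
    "lr_scheduler_type": "linear",
    "warmup_steps": 500,
    "max_steps": 3000,
}

# Effective (total) train batch size: per-device batch size times
# gradient accumulation steps, assuming single-device training.
total_train_batch_size = (
    training_args["per_device_train_batch_size"]
    * training_args["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 32, matching total_train_batch_size in the card
```

This also shows why the evaluation log stops at step 3000 in the new card: `max_steps` was reduced from 4000 to 3000, so the fourth evaluation row from the previous run no longer exists.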
generation_config.json CHANGED
@@ -55,7 +55,7 @@
    ],
    [
      2,
-     50359
+     50360
    ]
  ],
  "is_multilingual": true,
@@ -261,5 +261,5 @@
    "transcribe": 50360,
    "translate": 50359
  },
- "transformers_version": "4.37.1"
+ "transformers_version": "4.39.3"
 }
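Reading the first hunk of the `generation_config.json` diff against the `task_to_id` mapping visible in the second hunk: the token id at position 2 changes from 50359 to 50360, i.e. from "translate" to "transcribe". A small sketch of that lookup, using only the ids shown in the file (the interpretation that this entry is the forced task token is an assumption based on the mapping in the same config):

```python
# Task-to-token-id mapping as it appears in generation_config.json.
task_to_id = {"transcribe": 50360, "translate": 50359}

# Invert the mapping to read the diff: the entry [2, ...] changes
# 50359 -> 50360, switching the task token (assumed interpretation).
id_to_task = {v: k for k, v in task_to_id.items()}

old_id, new_id = 50359, 50360
print(id_to_task[old_id], "->", id_to_task[new_id])  # translate -> transcribe
```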
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6686fe3c7690f628e5ab073f06e026348ba174fb5fdd60bb29da596b5b876cda
+ oid sha256:de9e2e562c74f7b9449456a7377021dc410dd4be61fa3b158f578cdfc3e8aa96
  size 4993448880
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7d6beb4de5b782026f3ae2233a375c9c67631bb4cbf13783a2afdbf8b4441293
+ oid sha256:02460920da5874d9b6457e7cf852b06641e0b843afb07bc52742d8e23c735598
  size 1180663192