Sandiago21 committed
Commit: 47b48f9
Parent: 4a7901d

update model card README.md

Files changed (1)
  1. README.md +26 -21
README.md CHANGED
@@ -1,9 +1,11 @@
 ---
+language:
+- el
 license: apache-2.0
 tags:
 - generated_from_trainer
 datasets:
-- fleurs
+- google/fleurs
 metrics:
 - wer
 model-index:
@@ -13,15 +15,15 @@ model-index:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name: fleurs
-      type: fleurs
+      name: FLEURS
+      type: google/fleurs
       config: el_gr
       split: test
       args: el_gr
     metrics:
     - name: Wer
       type: wer
-      value: 0.8398897182435613
+      value: 0.19205164413960057
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,11 +31,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # whisper-large-v2-greek
 
-This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the fleurs dataset.
+This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the FLEURS dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2442
-- Wer Ortho: 0.8376
-- Wer: 0.8399
+- Loss: 0.3106
+- Wer Ortho: 0.2256
+- Wer: 0.1921
 
 ## Model description
 
@@ -52,30 +54,33 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
+- learning_rate: 2e-05
 - train_batch_size: 2
 - eval_batch_size: 2
 - seed: 42
-- gradient_accumulation_steps: 8
-- total_train_batch_size: 16
+- gradient_accumulation_steps: 16
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 50
-- num_epochs: 9
+- num_epochs: 12
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer    |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
-| 0.1502        | 1.0   | 217  | 0.1780          | 1.1731    | 1.1960 |
-| 0.0608        | 2.0   | 435  | 0.1869          | 1.1069    | 1.1209 |
-| 0.0305        | 3.0   | 653  | 0.2029          | 1.1970    | 1.2144 |
-| 0.0178        | 4.0   | 871  | 0.2186          | 1.3240    | 1.3458 |
-| 0.0108        | 5.0   | 1088 | 0.2253          | 1.1080    | 1.1200 |
-| 0.0076        | 6.0   | 1306 | 0.2301          | 1.0047    | 1.0155 |
-| 0.0072        | 7.0   | 1524 | 0.2402          | 1.1153    | 1.1405 |
-| 0.0051        | 8.0   | 1742 | 0.2434          | 1.0095    | 1.0264 |
-| 0.0056        | 8.97  | 1953 | 0.2442          | 0.8376    | 0.8399 |
+| 0.1809        | 1.0   | 274  | 0.2244          | 0.2261    | 0.1947 |
+| 0.0977        | 2.0   | 549  | 0.2306          | 0.2204    | 0.1856 |
+| 0.0594        | 3.0   | 824  | 0.2332          | 0.2137    | 0.1814 |
+| 0.0454        | 4.0   | 1099 | 0.2667          | 0.2315    | 0.1985 |
+| 0.028         | 5.0   | 1374 | 0.2579          | 0.2151    | 0.1822 |
+| 0.022         | 6.0   | 1649 | 0.2674          | 0.2188    | 0.1863 |
+| 0.0202        | 7.0   | 1924 | 0.2719          | 0.2140    | 0.1790 |
+| 0.0129        | 8.0   | 2199 | 0.2894          | 0.2219    | 0.1834 |
+| 0.0218        | 9.0   | 2473 | 0.2861          | 0.2180    | 0.1831 |
+| 0.0144        | 10.0  | 2748 | 0.3076          | 0.2211    | 0.1874 |
+| 0.0157        | 11.0  | 3023 | 0.3094          | 0.2264    | 0.1900 |
+| 0.0114        | 11.96 | 3288 | 0.3106          | 0.2256    | 0.1921 |
 
 
 ### Framework versions
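
For reference, the hyperparameters listed in the updated card map directly onto `Seq2SeqTrainingArguments` from `transformers`. The sketch below is an illustration only, assuming the standard `Seq2SeqTrainer` fine-tuning recipe; the `output_dir`, evaluation strategy, and `predict_with_generate` settings are assumptions and are not taken from this commit.

```python
# Sketch only: the hyperparameters from the updated model card expressed as
# Seq2SeqTrainingArguments, assuming the usual Seq2SeqTrainer Whisper recipe.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v2-greek",   # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,        # effective (total) train batch size: 2 * 16 = 32
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    num_train_epochs=12,
    seed=42,
    evaluation_strategy="epoch",           # assumption; matches the per-epoch rows in the results table
    predict_with_generate=True,            # assumption; needed so WER can be computed from generated text
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults, as listed in the card.
)
```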
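The card does not yet include a usage example; a minimal one is sketched below using the `transformers` ASR pipeline. The repository id `Sandiago21/whisper-large-v2-greek` is inferred from the commit author and model name and may differ, and the audio file name is a placeholder.

```python
# Minimal usage sketch (not part of this commit): transcribing Greek audio with the
# fine-tuned checkpoint via the automatic-speech-recognition pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Sandiago21/whisper-large-v2-greek",  # assumed repository id
    chunk_length_s=30,                          # Whisper operates on 30-second windows
)

# Force Greek transcription rather than letting Whisper auto-detect language/task.
result = asr(
    "sample.wav",  # placeholder: a local audio file, URL, or numpy array
    generate_kwargs={"language": "greek", "task": "transcribe"},
)
print(result["text"])
```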