Sandiago21 committed on
Commit
f59ccb3
1 Parent(s): a59f63a

update model card README.md

Files changed (1)
  1. README.md +20 -19
README.md CHANGED
@@ -1,9 +1,11 @@
 ---
+language:
+- el
 license: apache-2.0
 tags:
 - generated_from_trainer
 datasets:
-- fleurs
+- google/fleurs
 metrics:
 - wer
 model-index:
@@ -13,15 +15,15 @@ model-index:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name: fleurs
-      type: fleurs
+      name: FLEURS
+      type: google/fleurs
       config: el_gr
       split: test
       args: el_gr
     metrics:
     - name: Wer
       type: wer
-      value: 1.0564819086535293
+      value: 0.8398897182435613
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,11 +31,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # whisper-large-v2-greek
 
-This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the fleurs dataset.
+This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the FLEURS dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2061
-- Wer Ortho: 1.0424
-- Wer: 1.0565
+- Loss: 0.2442
+- Wer Ortho: 0.8376
+- Wer: 0.8399
 
 ## Model description
 
@@ -61,22 +63,21 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 50
-- num_epochs: 10
+- num_epochs: 9
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer    |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
-| 0.1416        | 1.0   | 217  | 0.1611          | 1.2560    | 1.2638 |
-| 0.0617        | 2.0   | 435  | 0.1612          | 1.1956    | 1.1930 |
-| 0.027         | 3.0   | 653  | 0.1716          | 1.6495    | 1.6518 |
-| 0.0155        | 4.0   | 871  | 0.1812          | 1.2816    | 1.2878 |
-| 0.0114        | 5.0   | 1088 | 0.1792          | 1.0087    | 1.0071 |
-| 0.0085        | 6.0   | 1306 | 0.1891          | 0.9757    | 0.9971 |
-| 0.0073        | 7.0   | 1524 | 0.2017          | 1.0040    | 1.0225 |
-| 0.0062        | 8.0   | 1742 | 0.1980          | 1.0737    | 1.0779 |
-| 0.0094        | 9.0   | 1959 | 0.2103          | 0.8469    | 0.8459 |
-| 0.0039        | 9.97  | 2170 | 0.2061          | 1.0424    | 1.0565 |
+| 0.1502        | 1.0   | 217  | 0.1780          | 1.1731    | 1.1960 |
+| 0.0608        | 2.0   | 435  | 0.1869          | 1.1069    | 1.1209 |
+| 0.0305        | 3.0   | 653  | 0.2029          | 1.1970    | 1.2144 |
+| 0.0178        | 4.0   | 871  | 0.2186          | 1.3240    | 1.3458 |
+| 0.0108        | 5.0   | 1088 | 0.2253          | 1.1080    | 1.1200 |
+| 0.0076        | 6.0   | 1306 | 0.2301          | 1.0047    | 1.0155 |
+| 0.0072        | 7.0   | 1524 | 0.2402          | 1.1153    | 1.1405 |
+| 0.0051        | 8.0   | 1742 | 0.2434          | 1.0095    | 1.0264 |
+| 0.0056        | 8.97  | 1953 | 0.2442          | 0.8376    | 0.8399 |
 
 
 ### Framework versions
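As context for the `Wer` column above (not part of the card itself, and not this project's evaluation code, which presumably uses a library such as `evaluate` or `jiwer`): word error rate is the word-level edit distance between hypothesis and reference, divided by the number of reference words, which is why values above 1.0 are possible when the model inserts many extra words. A minimal sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length.

    Assumes a non-empty, whitespace-tokenized reference.
    """
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words, single-row dynamic programming:
    # d[j] holds the distance between the first i ref words and first j hyp words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = d[0]          # d[i-1][j-1]
        d[0] = i
        for j, h in enumerate(hyp, 1):
            cur = d[j]       # d[i-1][j], saved before overwrite
            d[j] = min(d[j] + 1,                 # deletion
                       d[j - 1] + 1,             # insertion
                       prev + (r != h))          # substitution (or match)
            prev = cur
    return d[len(hyp)] / len(ref)
```

For example, `wer("a", "x y z")` is 3.0 (one substitution plus two insertions against a one-word reference), illustrating how epoch-level WER in the table can exceed 1.0.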