# Whisper Small - Greek (el)

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 el dataset for translation from Greek to English.
It achieves the following results on the evaluation set:
- Loss: 0.4642
- Wer: 25.6965
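The Wer figure above is the word error rate, reported as a percentage. For reference, the metric can be sketched as follows (an illustrative definition, not the evaluation code used for this model):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length, as a percentage."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the dog sat"))  # one substitution out of three words
```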

## Model description

This model was fine-tuned with the encoder frozen. Only the decoder weights have been changed by this training run.
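Freezing the encoder amounts to disabling gradients on its parameters before training. A minimal sketch of the pattern, using a tiny stand-in module (hypothetical; the actual run used transformers' WhisperForConditionalGeneration, where the same effect is achieved by turning off `requires_grad` on the encoder):

```python
import torch.nn as nn

# Tiny stand-in encoder-decoder model (hypothetical, for illustration only).
class TinySeq2Seq(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.encoder = nn.Linear(8, 8)
        self.decoder = nn.Linear(8, 8)

model = TinySeq2Seq()

# Freeze the encoder: its parameters receive no gradients,
# so the optimizer only ever updates decoder weights.
for p in model.encoder.parameters():
    p.requires_grad = False

trainable = sorted(n for n, p in model.named_parameters() if p.requires_grad)
print(trainable)  # ['decoder.bias', 'decoder.weight']
```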

## Intended uses & limitations

The purpose of this model was to understand how freezing part of the model affects learning, in an effort to assess the feasibility of enabling adapters.

## Training and evaluation data

The training was performed by streaming interleaved train+eval splits of the Greek (el) subset of mozilla-foundation/common_voice_11_0.
The test set was similarly used for validation.

## Training procedure

The script used to perform the training is included in the files of this space:

### Training hyperparameters

The following hyperparameters were used during training: