---
language:
- el
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Medium El Greco
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: el
      split: test
      args: el
    metrics:
    - name: Wer
      type: wer
      value: 13.976597325408619
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Medium El - Greek One

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4707
- Wer: 13.9766

## Model description

The model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) for Greek (`el`) automatic speech recognition, trained as part of the Whisper fine-tuning event (see the `whisper-event` tag).
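
A minimal inference sketch with the 🤗 Transformers `pipeline` API is shown below. The repository id `your-username/whisper-medium-el` is a placeholder, since the card does not state where this checkpoint is published; replace it with the actual model path.

```python
# Minimal transcription sketch (placeholder repo id; requires ffmpeg for audio decoding).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-medium-el",  # placeholder; use the real checkpoint id
)

# Force Greek transcription so Whisper does not fall back to language detection.
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(
    language="el", task="transcribe"
)

print(asr("sample_el.wav")["text"])  # path to a local audio file
```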

## Intended uses & limitations

The model is intended for transcribing Greek speech. It has only been evaluated on the Common Voice 11.0 Greek test split (WER 13.98); behaviour on other languages, other domains, or noisy audio is not reported here.

## Training and evaluation data

The model was fine-tuned on the Greek (`el`) configuration of [Common Voice 11.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) and evaluated on its test split; [google/fleurs](https://huggingface.co/datasets/google/fleurs) is also listed in the card metadata. A sketch of loading the evaluation split follows.
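
A loading sketch with 🤗 Datasets, resampled to Whisper's 16 kHz input rate (the Common Voice 11.0 dataset is gated on the Hub, so authentication is assumed):

```python
from datasets import Audio, load_dataset

# Greek (el) test split of Common Voice 11.0; the dataset is gated, hence the auth token.
common_voice = load_dataset(
    "mozilla-foundation/common_voice_11_0", "el", split="test", use_auth_token=True
)

# Whisper expects 16 kHz audio, so resample the audio column accordingly.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))

print(common_voice[0]["sentence"])  # reference transcript of the first example
```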

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
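
A rough `Seq2SeqTrainingArguments` equivalent of the hyperparameters above; this is a sketch only, with placeholder paths and an assumed evaluation cadence taken from the 1000-step intervals in the results table below.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-el",  # placeholder output directory
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=5000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=1000,                   # assumption: matches the eval cadence in the results table
    predict_with_generate=True,        # needed so WER is computed on generated transcripts
)
```

Adam with betas (0.9, 0.999) and epsilon 1e-08 is the optimizer default, so it needs no explicit argument.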

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0036        | 10.01 | 1000 | 0.4461          | 15.9082 |
| 0.0001        | 20.02 | 2000 | 0.4250          | 14.5245 |
| 0.0           | 31.0  | 3000 | 0.4526          | 14.1902 |
| 0.0           | 41.01 | 4000 | 0.4657          | 14.1252 |
| 0.0           | 52.0  | 5000 | 0.4707          | 13.9766 |
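
The WER values above can be reproduced for any set of transcripts with the `evaluate` library; the strings below are illustrative samples only.

```python
import evaluate

wer_metric = evaluate.load("wer")

references = ["καλημέρα σας λέγομαι γιώργος"]   # ground-truth transcripts
predictions = ["καλημέρα σας λέγομαι γιώργο"]   # model outputs (one substituted word)

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")  # 25.00% for this toy pair
```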


### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2