---
language:
- hu
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Small Hu CV17
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 17.0
      type: mozilla-foundation/common_voice_17_0
      config: hu
      split: None
      args: hu
    metrics:
    - name: Wer
      type: wer
      value: 5.627038
---

# Whisper Small Hu CV17

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Hungarian (`hu`) subset of the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0862
- Wer Ortho: 6.536794 (orthographic word error rate, %)
- Wer: 5.627038 (word error rate after text normalization, %)

## Model description

[openai/whisper-small](https://huggingface.co/openai/whisper-small) is a 244M-parameter encoder-decoder Transformer for speech recognition. This checkpoint adapts it to Hungarian speech-to-text by fine-tuning on the Hungarian subset of Common Voice 17.0; it takes 16 kHz audio as input and produces Hungarian transcripts.
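
A minimal inference sketch using the `transformers` ASR pipeline. The repo id `your-username/whisper-small-hu-cv17` is a placeholder for wherever this checkpoint is hosted, and the audio file name is illustrative:

```python
# Minimal inference sketch; the model id below is a placeholder.
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-small-hu-cv17",  # placeholder repo id
    generate_kwargs={"language": "hungarian", "task": "transcribe"},
)

# Whisper operates on 16 kHz audio; the pipeline resamples file inputs for you.
print(transcriber("sample_hu.wav")["text"])
```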

## Intended uses & limitations

The model is intended for transcribing Hungarian speech. Because it was fine-tuned on Common Voice, which consists mostly of read sentences recorded by volunteers, accuracy can be expected to drop on spontaneous conversation, noisy recordings, code-switched speech, and domain-specific vocabulary. Like other Whisper checkpoints, it can occasionally hallucinate text on silence or non-speech audio.

## Training and evaluation data

Training and evaluation used the Hungarian (`hu`) configuration of the [Common Voice 17.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0) dataset; the exact split and preprocessing details are not recorded in this card.
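
As a sketch, the data can be loaded and resampled to Whisper's expected 16 kHz as follows (the dataset is gated, so you must accept its terms on the Hub and be logged in; the `train` split is an assumption):

```python
# Sketch: load the Hungarian config of Common Voice 17.0 and resample to 16 kHz.
from datasets import Audio, load_dataset

cv_hu = load_dataset("mozilla-foundation/common_voice_17_0", "hu", split="train")
cv_hu = cv_hu.cast_column("audio", Audio(sampling_rate=16_000))

print(cv_hu[0]["sentence"])  # the reference transcript for the first clip
```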

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
- mixed_precision_training: Native AMP
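
A hedged reconstruction of these settings as `Seq2SeqTrainingArguments`; only the values listed above come from this card, while `output_dir`, the evaluation cadence (every 250 steps, inferred from the results table below), and `predict_with_generate` are assumptions:

```python
# Reconstruction of the listed hyperparameters; unlisted values are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-hu-cv17",  # placeholder
    learning_rate=2.5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=6000,
    fp16=True,                    # "Native AMP" mixed-precision training
    eval_strategy="steps",        # inferred: the table evaluates every 250 steps
    eval_steps=250,
    predict_with_generate=True,   # needed to compute WER during evaluation
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
```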

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer Ortho | Wer     |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.3208        | 0.3298 | 250  | 0.3558          | 34.5839   | 31.0969 |
| 0.2542        | 0.6596 | 500  | 0.2683          | 27.2010   | 24.2218 |
| 0.2204        | 0.9894 | 750  | 0.2035          | 22.2962   | 19.4960 |
| 0.1301        | 1.3193 | 1000 | 0.1673          | 17.6683   | 15.2446 |
| 0.1171        | 1.6491 | 1250 | 0.1433          | 15.2445   | 12.9677 |
| 0.1075        | 1.9789 | 1500 | 0.1192          | 12.8658   | 10.7827 |
| 0.0492        | 2.3087 | 1750 | 0.1091          | 11.2489   | 9.1877  |
| 0.0491        | 2.6385 | 2000 | 0.1046          | 10.7582   | 9.0543  |
| 0.0472        | 2.9683 | 2250 | 0.0945          | 9.0750    | 7.5037  |
| 0.0203        | 3.2982 | 2500 | 0.0932          | 8.5873    | 7.2428  |
| 0.0188        | 3.6280 | 2750 | 0.0900          | 8.2019    | 6.9671  |
| 0.0183        | 3.9578 | 3000 | 0.0859          | 7.5154    | 6.4008  |
| 0.0068        | 4.2876 | 3250 | 0.0837          | 7.2052    | 6.1785  |
| 0.0064        | 4.6174 | 3500 | 0.0843          | 6.9613    | 6.0213  |
| 0.0061        | 4.9472 | 3750 | 0.0847          | 6.9674    | 5.9383  |
| 0.0025        | 5.2770 | 4000 | 0.0851          | 6.6994    | 5.7723  |
| 0.0023        | 5.6069 | 4250 | 0.0847          | 6.6331    | 5.6863  |
| 0.0022        | 5.9367 | 4500 | 0.0844          | 6.6211    | 5.7338  |
| 0.0014        | 6.2665 | 4750 | 0.0855          | 6.5789    | 5.6834  |
| 0.0011        | 6.5963 | 5000 | 0.0856          | 6.5609    | 5.6567  |
| 0.0012        | 6.9261 | 5250 | 0.0862          | 6.5368    | 5.6270  |
| 0.0009        | 7.2559 | 5500 | 0.0870          | 6.5699    | 5.6597  |
| 0.0009        | 7.5858 | 5750 | 0.0873          | 6.5639    | 5.6597  |
| 0.0010        | 7.9156 | 6000 | 0.0873          | 6.5518    | 5.6389  |
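
The two WER columns follow the common convention of scoring raw text for `Wer Ortho` and normalized text for `Wer`. A sketch of that computation with 🤗 Evaluate, assuming Whisper's `BasicTextNormalizer` was the normalizer used (this is not documented in the card):

```python
# Sketch: orthographic vs. normalized WER, as in the table above.
# The choice of BasicTextNormalizer is an assumption.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

references = ["Ez egy példa mondat."]  # ground-truth transcripts
predictions = ["ez egy példa mondat"]  # model outputs

wer_ortho = 100 * wer_metric.compute(references=references, predictions=predictions)
wer = 100 * wer_metric.compute(
    references=[normalizer(r) for r in references],
    predictions=[normalizer(p) for p in predictions],
)
print(f"Wer Ortho: {wer_ortho:.2f}%  Wer: {wer:.2f}%")
```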


### Framework versions

- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1