---
language:
- it
license: apache-2.0
tags:
- generated_from_trainer
- whisper-event
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: luigisaetta/whisper-small3-it
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: mozilla-foundation/common_voice_11_0 it
      type: mozilla-foundation/common_voice_11_0
      config: it
      split: test
      args: it
    metrics:
    - name: Wer
      type: wer
      value: 10.2508
---


# Whisper Small3 Italian

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 it dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2307
- Wer: 10.2508

## Model description

This model is a fine-tuned version of the OpenAI Whisper Small model, trained on the dataset specified above.
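
For a quick try, a minimal transcription call with the 🤗 Transformers `pipeline` could look like the sketch below (the audio file name is a placeholder for any Italian speech recording):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
transcriber = pipeline(
    "automatic-speech-recognition",
    model="luigisaetta/whisper-small3-it",
)

# "sample_it.wav" is a placeholder: any Italian audio file works;
# the pipeline decodes and resamples it to the 16 kHz expected by Whisper.
result = transcriber("sample_it.wav")
print(result["text"])
```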

## Intended uses & limitations

This model has been developed as part of the Hugging Face Whisper Fine Tuning sprint, December 2022.

It is meant to spread knowledge of how these models are built, and it can be used to develop solutions
that need ASR for the Italian language.

It has not been extensively tested; accuracy on other datasets may be lower.

Please test it before using it.
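
As a minimal sanity check, something along these lines can spot-check the model on a few Common Voice examples. This is a sketch: the dataset is gated, so you must accept its terms on the Hub and be logged in, the 20-example slice is arbitrary, and the score will not match the reported WER, which was computed on the full test split:

```python
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="luigisaetta/whisper-small3-it",
)
wer_metric = evaluate.load("wer")

# Common Voice 11 is gated: accept the terms on its Hub page and
# run `huggingface-cli login` before loading it.
ds = load_dataset("mozilla-foundation/common_voice_11_0", "it", split="test[:20]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

predictions = [transcriber(ex["audio"]["array"])["text"] for ex in ds]
references = [ex["sentence"] for ex in ds]
print("WER:", wer_metric.compute(predictions=predictions, references=references))
```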

## Training and evaluation data

Trained and tested on Mozilla Common Voice, version 11 (Italian).

## Training procedure

The script **run.sh** and the Python file used for training are saved in the repository.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
- mixed_precision_training: Native AMP
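
For reference, these settings map roughly onto `Seq2SeqTrainingArguments` as sketched below; the exact flags are in **run.sh** in the repository, and the output directory here is a placeholder:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the reported hyperparameters (see run.sh for the exact values).
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small3-it",  # placeholder output path
    learning_rate=8e-6,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=6000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```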

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.226         | 2.01  | 1000 | 0.2494          | 11.3684 |
| 0.1017        | 4.02  | 2000 | 0.2403          | 10.6029 |
| 0.0491        | 6.03  | 3000 | 0.2549          | 10.9591 |
| 0.1102        | 8.04  | 4000 | 0.2307          | 10.2508 |
| 0.0384        | 10.05 | 5000 | 0.2592          | 10.5903 |
| 0.0285        | 12.06 | 6000 | 0.2537          | 10.5026 |


### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2