GIanlucaRub committed on
Commit
bac1f03
1 Parent(s): 72ed659

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
@@ -10,7 +10,7 @@ datasets:
 metrics:
 - wer
 model-index:
-- name: Whisper Tiny it 6
+- name: Whisper Tiny it 7
   results:
   - task:
       name: Automatic Speech Recognition
@@ -26,7 +26,7 @@ model-index:
       type: wer
       value: 97.56655574043262)
 ---
-# Whisper Tiny it 6
+# Whisper Tiny it 7
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
 It achieves the following results on the evaluation set:
 - Loss: 2.137834
@@ -36,7 +36,7 @@ It achieves the following results on the evaluation set:
 
 This model is the openai whisper small transformer adapted for Italian audio to text transcription.
 As part of the hyperparameter tuning process weight decay set to 0.1, attention dropout, encoder dropout and decoder dropout have been set to 0.1,
-the learning rate has been set to 1e-5, the number of decoder attention heads and encoder attention heads have been set to 8
+the learning rate has been set to 1e-6, the number of decoder attention heads and encoder attention heads have been set to 8
 however, it did not improved the performance on the evaluation set.
 
 ## Intended uses & limitations
@@ -56,7 +56,7 @@ After loading the pre trained model, it has been trained on the dataset.
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
+- learning_rate: 1e-06
 - train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
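
The hyperparameters the card describes (all dropouts at 0.1, 8 attention heads, and the learning rate this commit lowers from 1e-5 to 1e-6) could be collected as in the sketch below. The commit does not include the training script, so the dictionary structure and key names here are assumptions for illustration; only the values come from the model card text.

```python
# Hedged sketch of the configuration this commit documents; the real training
# script is not part of the commit, so this layout is an assumption.

# Model overrides tried during hyperparameter tuning
# (card: all dropouts 0.1, encoder/decoder attention heads 8).
model_overrides = {
    "attention_dropout": 0.1,
    "dropout": 0.1,                # encoder and decoder dropout
    "encoder_attention_heads": 8,
    "decoder_attention_heads": 8,
}

# Training hyperparameters from the card's list.
training_args = {
    "learning_rate": 1e-6,         # this commit: lowered from 1e-5
    "weight_decay": 0.1,
    "train_batch_size": 16,
    "eval_batch_size": 8,
    "seed": 42,
}

print(training_args["learning_rate"])
```

Per the card, neither the earlier 1e-5 learning rate nor these dropout and head settings improved evaluation performance, which is consistent with this commit trying a still smaller learning rate.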