jonatasgrosman committed
Commit 863305b
1 Parent(s): 1828d04

add evaluation

README.md CHANGED
@@ -9,6 +9,7 @@ datasets:
  - mozilla-foundation/common_voice_11_0
  metrics:
  - wer
+ - cer
  model-index:
  - name: Whisper Large Spanish
    results:
@@ -19,71 +20,81 @@ model-index:
        name: mozilla-foundation/common_voice_11_0 es
        type: mozilla-foundation/common_voice_11_0
        config: es
-       split: validation[:1000]
+       split: test
        args: es
      metrics:
-     - name: Wer
+     - name: WER
        type: wer
-       value: 3.6508096148043854
+       value: 4.673613637544826
+     - name: CER
+       type: cer
+       value: 1.5573247819517182
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: google/fleurs es_419
+       type: google/fleurs
+       config: es_419
+       split: test
+       args: es_419
+     metrics:
+     - name: WER
+       type: wer
+       value: 5.396216546072705
+     - name: CER
+       type: cer
+       value: 3.450427960057061
  ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # Whisper Large Spanish
-
- This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 es dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.1321
- - Wer: 3.6508
- - Cer: 1.0572

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 1e-06
- - train_batch_size: 16
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 2
- - total_train_batch_size: 32
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 2000
- - training_steps: 20000
- - mixed_precision_training: Native AMP

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
- |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
- | 0.1837 | 0.32 | 1000 | 0.1669 | 4.2442 | 1.2488 |
- | 0.1343 | 0.64 | 2000 | 0.1444 | 4.0833 | 1.2084 |
- | 0.1312 | 0.96 | 3000 | 0.1362 | 3.9324 | 1.1933 |
- | 0.1206 | 1.28 | 4000 | 0.1333 | 3.8520 | 1.1748 |
- | 0.1143 | 1.6 | 5000 | 0.1321 | 3.6508 | 1.0572 |
- | 0.1202 | 1.92 | 6000 | 0.1291 | 3.8017 | 1.1311 |
- | 0.0856 | 2.24 | 7000 | 0.1325 | 3.7011 | 1.0841 |
- | 0.1005 | 2.56 | 8000 | 0.1320 | 3.7011 | 1.0555 |

- ### Framework versions

- - Transformers 4.26.0.dev0
- - Pytorch 1.13.1+cu117
- - Datasets 2.7.1.dev0
- - Tokenizers 0.13.2
+ # Whisper Large Spanish

+ This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Spanish train split of [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0). When using this model, make sure that your speech input is sampled at 16 kHz.

+ ## Usage

+ ```python
+ from transformers import pipeline

+ transcriber = pipeline(
+     "automatic-speech-recognition",
+     model="jonatasgrosman/whisper-large-es-cv11"
+ )

+ # Force Spanish transcription, regardless of the language Whisper detects
+ transcriber.model.config.forced_decoder_ids = (
+     transcriber.tokenizer.get_decoder_prompt_ids(
+         language="es",
+         task="transcribe"
+     )
+ )

+ transcription = transcriber("path/to/my_audio.wav")
+ ```

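Since the card requires 16 kHz input, audio at a different sampling rate can be resampled before it reaches the pipeline. A minimal sketch using librosa (an assumed extra dependency; the file path is a placeholder), reusing the `transcriber` defined above:

```python
import librosa

# librosa resamples to the requested rate while loading
speech, sampling_rate = librosa.load("path/to/my_audio.wav", sr=16_000)

# The ASR pipeline also accepts a dict with the raw waveform and its sampling rate
transcription = transcriber({"raw": speech, "sampling_rate": sampling_rate})
print(transcription["text"])
```
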
+ ## Evaluation

+ The model was evaluated on the test splits of two datasets: [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) (the same dataset used for fine-tuning) and [Fleurs](https://huggingface.co/datasets/google/fleurs) (a dataset not seen during fine-tuning). Since Whisper transcribes casing and punctuation, the evaluation was run in two scenarios: on the raw text and on normalized text (lowercased, punctuation removed). Additionally, for Fleurs, the model was also evaluated on the subset of samples whose transcriptions contain no numerical values, because numbers are written out differently in Fleurs than in Common Voice, and this mismatch is expected to hurt performance on numeric transcriptions in Fleurs. Minimal sketches of the normalization and of the numeric filter are shown after the tables below.

+ ### Common Voice 11

+ | Model | CER | WER |
+ | --- | --- | --- |
+ | [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) | 2.43 | 8.85 |
+ | [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) + text normalization | 1.56 | 4.67 |
+ | [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 3.71 | 12.34 |
+ | [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) + text normalization | 2.45 | 6.30 |

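The "+ text normalization" rows lowercase the transcriptions and strip punctuation before scoring. The exact evaluation script is not included in the card; a minimal sketch of that scenario, assuming the `evaluate` library and hypothetical reference/prediction lists:

```python
import re

import evaluate

# WER and CER metrics from the Hugging Face evaluate library
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

def normalize(text: str) -> str:
    # "Text normalization" scenario: lowercase and remove punctuation
    return re.sub(r"[^\w\s]", "", text.lower()).strip()

# Hypothetical reference transcriptions and model outputs
references = ["¿Cómo estás hoy?", "Muy bien, gracias."]
predictions = ["como estás hoy", "muy bien gracias"]

refs = [normalize(t) for t in references]
preds = [normalize(t) for t in predictions]

# Report as percentages, matching the tables
print("WER:", 100 * wer_metric.compute(references=refs, predictions=preds))
print("CER:", 100 * cer_metric.compute(references=refs, predictions=preds))
```
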
+ ### Fleurs

+ | Model | CER | WER |
+ | --- | --- | --- |
+ | [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) | 3.06 | 9.11 |
+ | [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) + text normalization | 3.45 | 5.40 |
+ | [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) + keep only non-numeric samples | 1.83 | 7.57 |
+ | [jonatasgrosman/whisper-large-es-cv11](https://huggingface.co/jonatasgrosman/whisper-large-es-cv11) + text normalization + keep only non-numeric samples | 2.36 | 4.14 |
+ | [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 2.30 | 8.50 |
+ | [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) + text normalization | 2.76 | 4.79 |
+ | [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) + keep only non-numeric samples | 1.93 | 7.33 |
+ | [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) + text normalization + keep only non-numeric samples | 2.50 | 4.28 |
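
The "keep only non-numeric samples" rows drop every utterance whose reference transcription contains a digit before scoring. A minimal sketch of that filter on the Fleurs test split (the `datasets` library and the `transcription` column name are assumptions based on google/fleurs):

```python
import re

from datasets import load_dataset

# Spanish Fleurs test split, the same split reported in the table above
fleurs = load_dataset("google/fleurs", "es_419", split="test")

def has_no_digits(sample) -> bool:
    # Keep only samples whose reference transcription contains no digits
    return re.search(r"\d", sample["transcription"]) is None

fleurs_non_numeric = fleurs.filter(has_no_digits)
print(len(fleurs), "->", len(fleurs_non_numeric))
```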
evaluation_cv11_test.json ADDED
The diff for this file is too large to render. See raw diff
 
evaluation_fleurs_test.json ADDED
The diff for this file is too large to render. See raw diff
 
evaluation_whisper-large-v2_cv11_test.json ADDED
The diff for this file is too large to render. See raw diff
 
evaluation_whisper-large-v2_fleurs_test.json ADDED
The diff for this file is too large to render. See raw diff