asierhv committed (verified) · Commit 727c33a · 1 Parent(s): 9652d10

added description and "how to use" example

Files changed (1)
  1. README.md +128 -37
README.md CHANGED
@@ -28,45 +28,94 @@ model-index:
  value: 16.904258359531294
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # Whisper Tiny Catalan

- This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_13_0 ca dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.3180
- - Wer: 16.9043

  ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

  ## Training and evaluation data

- More information needed

  ## Training procedure

  ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 3.75e-05
- - train_batch_size: 256
- - eval_batch_size: 128
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - training_steps: 5000

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Wer |
  |:-------------:|:-----:|:----:|:---------------:|:-------:|
  | 0.2098 | 7.02 | 1000 | 0.3994 | 22.5047 |
  | 0.162 | 15.02 | 2000 | 0.3454 | 19.4181 |
@@ -74,27 +123,57 @@ The following hyperparameters were used during training:
  | 0.0934 | 31.01 | 4000 | 0.3312 | 18.1600 |
  | 0.1167 | 39.0 | 5000 | 0.3180 | 16.9043 |

- ### Framework versions

- - Transformers 4.33.0.dev0
- - Pytorch 2.0.1+cu117
- - Datasets 2.14.4
- - Tokenizers 0.13.3

  ## Citation

- If you use these models in your research, please cite:

  ```bibtex
  @misc{dezuazo2025whisperlmimprovingasrmodels,
-     title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
-     author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
-     year={2025},
-     eprint={2503.23542},
-     archivePrefix={arXiv},
-     primaryClass={cs.CL},
-     url={https://arxiv.org/abs/2503.23542},
  }
  ```
@@ -102,9 +181,21 @@ Please, check the related paper preprint in
  [arXiv:2503.23542](https://arxiv.org/abs/2503.23542)
  for more details.

- ## Licensing

  This model is available under the
  [Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
  You are free to use, modify, and distribute this model as long as you credit
- the original creators.
  value: 16.904258359531294
  ---

  # Whisper Tiny Catalan

+ ## Model summary
+
+ **Whisper Tiny Catalan** is an automatic speech recognition (ASR) model for **Catalan (ca)** speech. It is fine-tuned from [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the **Catalan subset of Mozilla Common Voice 13.0**, achieving a **Word Error Rate (WER) of 16.90%** on the evaluation split.
+
+ This model is intended for general-purpose transcription of Catalan audio.
+
+ ---

  ## Model description

+ * **Architecture:** Transformer-based encoder–decoder (Whisper)
+ * **Base model:** openai/whisper-tiny
+ * **Language:** Catalan (ca)
+ * **Task:** Automatic Speech Recognition (ASR)
+ * **Output:** Text transcription in Catalan
+ * **Decoding:** Autoregressive sequence-to-sequence decoding
+
+ Fine-tuned to improve transcription quality on Catalan audio, leveraging Whisper’s multilingual pretraining.
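+
+ Below is a minimal sketch of loading the checkpoint with the lower-level `transformers` classes and pinning decoding to Catalan transcription; the repository ID, file name, and audio-loading details are illustrative assumptions rather than the exact published usage.
+
+ ```python
+ import torch
+ from datasets import Audio, Dataset
+ from transformers import WhisperForConditionalGeneration, WhisperProcessor
+
+ model_id = "HiTZ/whisper-tiny-ca"  # assumed repo ID, replace with the actual one
+
+ processor = WhisperProcessor.from_pretrained(model_id)
+ model = WhisperForConditionalGeneration.from_pretrained(model_id)
+
+ # Decode one local file and resample it to the 16 kHz rate Whisper expects.
+ ds = Dataset.from_dict({"audio": ["audio.wav"]}).cast_column("audio", Audio(sampling_rate=16_000))
+ waveform = ds[0]["audio"]["array"]
+
+ # The processor turns the waveform into log-Mel input features.
+ inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
+
+ # Autoregressive sequence-to-sequence decoding, forced to Catalan transcription.
+ with torch.no_grad():
+     generated_ids = model.generate(inputs.input_features, language="ca", task="transcribe")
+
+ print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
+ ```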

+ ---
+
+ ## Intended use
+
+ ### Primary use cases

+ * Transcription of Catalan audio recordings
+ * Offline or batch ASR pipelines
+ * Research and development in Catalan ASR
+ * Educational and media applications
+
+ ### Out-of-scope use
+
+ * Real-time or low-latency ASR without optimization
+ * Speech translation tasks
+ * Safety-critical applications without further validation
+
+ ---
+
+ ## Limitations and known issues
+
+ * Performance may degrade on:
+   * Noisy or low-quality recordings
+   * Conversational or spontaneous speech
+   * Dialects underrepresented in Common Voice
+ * Dataset biases may be reflected in outputs
+ * Occasional transcription errors can occur under difficult acoustic conditions
+
+ ---

  ## Training and evaluation data

+ * **Dataset:** Mozilla Common Voice 13.0 (Catalan subset)
+ * **Data type:** Crowd-sourced, read speech
+ * **Preprocessing** (sketched below):
+   * Audio resampled to 16 kHz
+   * Text normalized using the Whisper tokenizer
+   * Filtering of invalid or problematic samples
+
+ * **Evaluation metric:** Word Error Rate (WER) on a held-out evaluation set
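+
+ As a rough, non-authoritative illustration of the preprocessing above (not the original training script), the Catalan subset can be loaded and resampled to 16 kHz with the `datasets` library; the split name is an assumption:
+
+ ```python
+ from datasets import Audio, load_dataset
+
+ # Common Voice 13.0, Catalan subset (gated: accept the dataset terms on the Hub first).
+ cv_ca = load_dataset("mozilla-foundation/common_voice_13_0", "ca", split="validation")
+
+ # Whisper consumes 16 kHz audio, so resample on the fly.
+ cv_ca = cv_ca.cast_column("audio", Audio(sampling_rate=16_000))
+
+ sample = cv_ca[0]
+ print(sample["audio"]["array"].shape, sample["sentence"])
+ ```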
+
+ ---
+
+ ## Evaluation results
+
+ | Metric     | Value      |
+ | ---------- | ---------- |
+ | WER (eval) | **16.90%** |
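+
+ The number above is the score reported by the training run; a comparable WER can be computed with the `evaluate` library, as in this sketch (the reference and predicted transcripts are placeholders):
+
+ ```python
+ import evaluate
+
+ wer_metric = evaluate.load("wer")
+
+ # Placeholder reference transcripts and model outputs.
+ references = ["bon dia a tothom", "la reunió comença a les deu"]
+ predictions = ["bon dia a tothom", "la reunió comença a les onze"]
+
+ # WER = (substitutions + deletions + insertions) / number of reference words.
+ wer = wer_metric.compute(references=references, predictions=predictions)
+ print(f"WER: {100 * wer:.2f}%")
+ ```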
+
+ ---

  ## Training procedure

  ### Training hyperparameters

+ * Learning rate: 3.75e-5
+ * Optimizer: Adam (β1=0.9, β2=0.999, ε=1e-8)
+ * LR scheduler: Linear
+ * Warmup steps: 500
+ * Training steps: 5,000
+ * Train batch size: 256
+ * Eval batch size: 128
+ * Seed: 42
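+
+ The listing above maps onto `Seq2SeqTrainingArguments` roughly as sketched below; this is not the original training script, and the output path, per-device batch interpretation, and generation setting are assumptions:
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="./whisper-tiny-ca",        # assumed output path
+     learning_rate=3.75e-5,
+     lr_scheduler_type="linear",
+     warmup_steps=500,
+     max_steps=5000,
+     per_device_train_batch_size=256,       # the card lists 256; per-device vs. total is assumed
+     per_device_eval_batch_size=128,
+     seed=42,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     predict_with_generate=True,            # assumption: generate during eval to compute WER
+ )
+ ```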
 
+ ### Training results (summary)

+ | Training Loss | Epoch | Step | Validation Loss | WER |
  |:-------------:|:-----:|:----:|:---------------:|:-------:|
  | 0.2098 | 7.02 | 1000 | 0.3994 | 22.5047 |
  | 0.162 | 15.02 | 2000 | 0.3454 | 19.4181 |

  | 0.0934 | 31.01 | 4000 | 0.3312 | 18.1600 |
  | 0.1167 | 39.0 | 5000 | 0.3180 | 16.9043 |

+ ---
+
+ ## Framework versions
+
+ - Transformers 4.33.0.dev0
+ - PyTorch 2.0.1+cu117
+ - Datasets 2.14.4
+ - Tokenizers 0.13.3
+
+ ---
+
+ ## How to use
+
+ ```python
+ from transformers import pipeline
+
+ hf_model = "HiTZ/whisper-tiny-ca"  # replace with actual repo ID
+ device = 0  # set to -1 for CPU

+ pipe = pipeline(
+     task="automatic-speech-recognition",
+     model=hf_model,
+     device=device
+ )

+ result = pipe("audio.wav")
+ print(result["text"])
+ ```
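+
+ For recordings longer than Whisper's 30-second input window, the same pipeline can chunk the audio, and decoding can be pinned to Catalan; the keyword values below are reasonable defaults rather than tuned settings (`pipe` is the pipeline created above):
+
+ ```python
+ # Long-form transcription: chunk the input and force Catalan transcription.
+ result = pipe(
+     "long_audio.wav",
+     chunk_length_s=30,
+     return_timestamps=True,
+     generate_kwargs={"language": "ca", "task": "transcribe"},
+ )
+ print(result["text"])
+ ```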
+
+ ---
+
+ ## Ethical considerations and risks
+
+ * This model transcribes speech and may process personal data.
+ * Users should ensure compliance with applicable data protection laws (e.g., GDPR).
+ * The model should not be used for surveillance or non-consensual audio processing.
+
+ ---

  ## Citation

+ If you use this model in your research, please cite:

  ```bibtex
  @misc{dezuazo2025whisperlmimprovingasrmodels,
+     title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
+     author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
+     year={2025},
+     eprint={2503.23542},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
  }
  ```
  [arXiv:2503.23542](https://arxiv.org/abs/2503.23542)
  for more details.

+ ---
+
+ ## License

  This model is available under the
  [Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
  You are free to use, modify, and distribute this model as long as you credit
+ the original creators.
+
+ ---
+
+ ## Contact and attribution
+
+ * Fine-tuning and evaluation: HiTZ/Aholab - Basque Center for Language Technology
+ * Base model: OpenAI Whisper
+ * Dataset: Mozilla Common Voice
+
+ For questions or issues, please open an issue in the model repository.