asierhv committed
Commit 8a4dd7e · verified · 1 Parent(s): 4b39408

Added description and "how to use" example

Files changed (1):
  1. README.md +142 -35

README.md CHANGED
@@ -28,46 +28,112 @@ model-index:
  value: 5.971420405830237
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # Whisper Large-V3 Catalan

- This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the mozilla-foundation/common_voice_13_0 ca dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.2783
- - Wer: 5.9714

  ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

  ## Training and evaluation data

- More information needed

  ## Training procedure

  ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 32
- - eval_batch_size: 16
- - seed: 42
- - gradient_accumulation_steps: 2
- - total_train_batch_size: 64
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - training_steps: 20000
- - mixed_precision_training: Native AMP
-
- ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Wer |
  |:-------------:|:-----:|:-----:|:---------------:|:------:|
@@ -100,19 +166,48 @@ The following hyperparameters were used during training:
  - Datasets 2.16.1
  - Tokenizers 0.15.1

  ## Citation

- If you use these models in your research, please cite:

  ```bibtex
  @misc{dezuazo2025whisperlmimprovingasrmodels,
-   title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
-   author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
-   year={2025},
-   eprint={2503.23542},
-   archivePrefix={arXiv},
-   primaryClass={cs.CL},
-   url={https://arxiv.org/abs/2503.23542},
  }
  ```

@@ -120,9 +215,21 @@ Please, check the related paper preprint in
  [arXiv:2503.23542](https://arxiv.org/abs/2503.23542)
  for more details.

- ## Licensing

  This model is available under the
  [Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
  You are free to use, modify, and distribute this model as long as you credit
- the original creators.

  value: 5.971420405830237
  ---

  # Whisper Large-V3 Catalan

+ ## Model summary
+
+ **Whisper Large-V3 Catalan** is an automatic speech recognition (ASR) model for **Catalan** speech. It is fine-tuned from [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Catalan portion of **Mozilla Common Voice 13.0**, achieving a **Word Error Rate (WER) of 5.97%** on the Common Voice test split.
+
+ The model is intended for high-quality transcription of Catalan speech in a variety of accents and recording conditions, including read and semi-spontaneous speech.
+
+ ---

  ## Model description

+ * **Architecture:** Transformer-based encoder–decoder (Whisper)
+ * **Base model:** openai/whisper-large-v3
+ * **Language:** Catalan (ca)
+ * **Task:** Automatic Speech Recognition (ASR)
+ * **Output:** Text transcription in Catalan
+ * **Decoding:** Autoregressive sequence-to-sequence decoding
+
+ This model leverages Whisper's multilingual pretraining and large-scale speech-text alignment, followed by supervised fine-tuning on Catalan speech data to improve language-specific accuracy.
+
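+ In addition to the pipeline shown in the "How to use" section below, the model can be loaded directly with `WhisperProcessor` and `WhisperForConditionalGeneration` for finer control over decoding. The snippet below is a minimal sketch assuming a 16 kHz mono recording; the file name `audio.wav` is illustrative, and forcing the language/task is optional for a fine-tuned checkpoint.
+
+ ```python
+ import torch
+ import soundfile as sf
+ from transformers import WhisperProcessor, WhisperForConditionalGeneration
+
+ model_id = "HiTZ/whisper-large-v3-ca"
+ processor = WhisperProcessor.from_pretrained(model_id)
+ model = WhisperForConditionalGeneration.from_pretrained(model_id)
+
+ # Load a 16 kHz mono waveform (illustrative file name).
+ speech, sampling_rate = sf.read("audio.wav")
+
+ # Convert the waveform into log-Mel input features.
+ inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
+
+ # Autoregressive decoding; explicitly request Catalan transcription.
+ with torch.no_grad():
+     generated_ids = model.generate(
+         input_features=inputs.input_features,
+         language="ca",
+         task="transcribe",
+     )
+
+ print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
+ ```
+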
+ ---
+
+ ## Intended use
+
+ ### Primary use cases
+
+ * Transcription of Catalan audio recordings
+ * Speech-to-text pipelines for media, education, and research
+ * Accessibility tools (e.g., subtitles, captions)
+ * Offline or batch ASR for Catalan datasets
+
+ ### Intended users
+
+ * Researchers working on Catalan or low-resource ASR
+ * Developers building Catalan speech applications
+ * Institutions and companies requiring Catalan transcription
+
+ ### Out-of-scope use
+
+ * Real-time or low-latency ASR without optimization
+ * Speech translation (this model performs transcription only)
+ * Safety-critical applications without additional validation
+
+ ---
+
+ ## Limitations and known issues
+
+ * Performance may degrade on:
+   * Highly noisy audio
+   * Strong regional accents underrepresented in Common Voice
+   * Conversational or overlapping speech
+ * The model may produce hallucinated text when audio quality is very poor or the input is silent.
+ * Biases present in the Common Voice dataset (e.g., demographic or accent imbalance) may be reflected in model outputs.
+
+ Users are encouraged to evaluate the model on their own data before deployment.
+
+ ---

  ## Training and evaluation data

+ ### Training data
+
+ * **Dataset:** Mozilla Common Voice 13.0 (Catalan subset)
+ * **Data type:** Crowd-sourced, read speech
+ * **Preprocessing:**
+   * Audio resampled to 16 kHz
+   * Text normalized with the Whisper tokenizer
+   * Invalid or excessively long samples filtered
+
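+ As a rough illustration of the preprocessing steps above, the Catalan split can be loaded and resampled with the `datasets` library. This is a minimal sketch, not the exact preparation script used for training, and it assumes you have accepted the Common Voice terms on the Hugging Face Hub.
+
+ ```python
+ from datasets import Audio, load_dataset
+
+ # Load the Catalan subset of Common Voice 13.0 (gated dataset; requires accepting its terms).
+ cv_ca = load_dataset("mozilla-foundation/common_voice_13_0", "ca", split="train")
+
+ # Resample the audio column to the 16 kHz rate expected by Whisper.
+ cv_ca = cv_ca.cast_column("audio", Audio(sampling_rate=16_000))
+
+ sample = cv_ca[0]
+ print(sample["audio"]["sampling_rate"])  # 16000
+ print(sample["sentence"])                # reference transcription
+ ```
+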
+ ### Evaluation data
+
+ * **Dataset:** Common Voice 13.0 (Catalan test split)
+ * **Metric:** Word Error Rate (WER)
+
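+ WER can be recomputed on any set of reference/hypothesis pairs with the `evaluate` library; the snippet below is a toy sketch, not the script used to produce the reported figure.
+
+ ```python
+ import evaluate
+
+ wer_metric = evaluate.load("wer")
+
+ # Toy example: reference transcripts vs. model outputs.
+ references = ["bon dia a tothom", "com estàs avui"]
+ predictions = ["bon dia a tothom", "com estas avui"]
+
+ wer = wer_metric.compute(predictions=predictions, references=references)
+ print(f"WER: {100 * wer:.2f}%")
+ ```
+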
+ ---
+
+ ## Evaluation results
+
+ | Metric     | Value     |
+ | ---------- | --------- |
+ | WER (test) | **5.97%** |
+
+ These results indicate strong performance compared to the base Whisper multilingual model on Catalan speech.
+
+ ---

  ## Training procedure

  ### Training hyperparameters

+ * Learning rate: 1e-5
+ * Optimizer: Adam (β1=0.9, β2=0.999, ε=1e-8)
+ * LR scheduler: Linear
+ * Warmup steps: 500
+ * Training steps: 20,000
+ * Train batch size: 32
+ * Gradient accumulation steps: 2
+ * Effective batch size: 64
+ * Evaluation batch size: 16
+ * Mixed precision: FP16 (Native AMP)
+ * Seed: 42
+
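+ For reference, these settings map onto a `Seq2SeqTrainingArguments` configuration roughly as sketched below. This is an illustration of the listed hyperparameters, not the exact training script; `output_dir` is a placeholder.
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="./whisper-large-v3-ca",  # placeholder path
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=16,
+     gradient_accumulation_steps=2,       # effective batch size of 64
+     learning_rate=1e-5,
+     lr_scheduler_type="linear",
+     warmup_steps=500,
+     max_steps=20000,
+     fp16=True,                           # native AMP mixed precision
+     seed=42,
+ )
+ ```
+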
+ ### Training results (summary)

  | Training Loss | Epoch | Step | Validation Loss | Wer |
  |:-------------:|:-----:|:-----:|:---------------:|:------:|

  - Datasets 2.16.1
  - Tokenizers 0.15.1

+ ---
+
+ ## How to use
+
+ ```python
+ from transformers import pipeline
+
+ # Model repository on the Hugging Face Hub and target device.
+ hf_model = "HiTZ/whisper-large-v3-ca"
+ device = 0  # GPU index; set to -1 for CPU
+
+ # Build an automatic speech recognition pipeline with the fine-tuned model.
+ pipe = pipeline(
+     task="automatic-speech-recognition",
+     model=hf_model,
+     device=device,
+ )
+
+ # Transcribe an audio file and print the recognized text.
+ result = pipe("audio.wav")
+ print(result["text"])
+ ```
+
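+ For recordings longer than roughly 30 seconds, the pipeline can transcribe in chunks; the settings below are a suggested starting point rather than tuned values, and the file name is illustrative.
+
+ ```python
+ # Reuses `pipeline`, `hf_model`, and `device` from the snippet above.
+ pipe_long = pipeline(
+     task="automatic-speech-recognition",
+     model=hf_model,
+     device=device,
+     chunk_length_s=30,
+ )
+
+ result = pipe_long("long_audio.wav", return_timestamps=True)
+ print(result["text"])
+ for chunk in result["chunks"]:
+     print(chunk["timestamp"], chunk["text"])
+ ```
+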
+ ---
+
+ ## Ethical considerations and risks
+
+ * This model transcribes speech and may process personal data.
+ * Users should ensure compliance with applicable data protection laws (e.g., GDPR).
+ * The model should not be used for surveillance or non-consensual audio processing.
+
+ ---

  ## Citation

+ If you use this model in your research, please cite:

  ```bibtex
  @misc{dezuazo2025whisperlmimprovingasrmodels,
+   title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
+   author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
+   year={2025},
+   eprint={2503.23542},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
  }
  ```

  Please, check the related paper preprint in
  [arXiv:2503.23542](https://arxiv.org/abs/2503.23542)
  for more details.

+ ---
+
+ ## License

  This model is available under the
  [Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
  You are free to use, modify, and distribute this model as long as you credit
+ the original creators.
+
+ ---
+
+ ## Contact and attribution
+
+ * Fine-tuning and evaluation: HiTZ/Aholab - Basque Center for Language Technology
+ * Base model: OpenAI Whisper
+ * Dataset: Mozilla Common Voice
+
+ For questions or issues, please open an issue in the model repository.