---
language: el
datasets:
- common_voice
- CSS10
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Greek XLSR Wav2Vec2 Large 53 - CV + CSS10
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice el
      type: common_voice
      args: el
    metrics:
    - name: Test WER
      type: wer
      value: 20.89
---

# Wav2Vec2-Large-XLSR-53-Greek

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10) datasets.
When using this model, make sure that your speech input is sampled at 16 kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "el", split="test")

processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```


## Evaluation

The model can be evaluated as follows on the Greek test data of Common Voice.

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays and normalize the transcripts.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference over the test set in batches.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 20.89 %

## Training

The Common Voice `train` and `validation` splits and the CSS10 dataset were used for training, with CSS10 added as an `extra` split. The sampling rate and format of the CSS10 files are different, so the function `speech_file_to_array_fn` was changed to:
```python
import librosa
import soundfile as sf

def speech_file_to_array_fn(batch):
    try:
        speech_array, sampling_rate = sf.read(batch["path"] + ".wav")
    except Exception:
        # Resample on first access and cache the result as a wav file.
        speech_array, sampling_rate = librosa.load(batch["path"], sr=16000, res_type='zero_order_hold')
        sf.write(batch["path"] + ".wav", speech_array, sampling_rate, subtype='PCM_24')
    batch["speech"] = speech_array
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["text"]
    return batch
```

As suggested by [Florian Zimmermeister](https://github.com/flozi00).

The script used for training can be found in [run_common_voice.py](examples/research_projects/wav2vec2/run_common_voice.py), still a pending PR at the time of writing. The only changes are to `speech_file_to_array_fn`. The batch size was kept at 32 (using `gradient_accumulation_steps`) on one of the [OVH](https://www.ovh.com/) machines, with a V100 GPU (thank you very much, [OVH](https://www.ovh.com/)). The model trained for 40 epochs: the first 20 on the `train+validation` splits, and then the `extra` split with the CSS10 data was added at the 20th epoch.