---
language: gl
datasets:
- OpenSLR 77
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Galician Wav2Vec2-Large-XLSR-53
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: OpenSLR
      type: openslr
      args: gl
    metrics:
    - name: Test WER
      type: wer
      value: 16.79
---

# Wav2Vec2-Large-XLSR-53-galician

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Galician using the OpenSLR [SLR77](https://openslr.org/77/) dataset.

When using this model, make sure that your speech input is sampled at 16 kHz.
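
If your audio has a different sampling rate, convert it before running the model. A minimal sketch with torchaudio (the file name `example.wav` is a hypothetical placeholder):

```python
import torchaudio

# Hypothetical input file; any format torchaudio can read works
speech_array, sampling_rate = torchaudio.load("example.wav")

# Resample to the 16 kHz rate the model was fine-tuned on, if needed
if sampling_rate != 16_000:
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    speech_array = resampler(speech_array)
```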

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Common Voice does not include Galician yet; load OpenSLR SLR77 or your own dataset instead
test_dataset = load_dataset("common_voice", "gl", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model = Wav2Vec2ForCTC.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
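
The same pipeline also works outside of `datasets` for a single recording. A minimal sketch, assuming a hypothetical local file `example.wav` that has already been resampled to 16 kHz mono as shown above:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model = Wav2Vec2ForCTC.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")

# Hypothetical 16 kHz mono recording; resample beforehand if necessary
speech_array, sampling_rate = torchaudio.load("example.wav")
inputs = processor(speech_array.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# Greedy CTC decoding: take the most likely token at each frame
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```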

## Evaluation

The model can be evaluated as follows on the Galician test data of Common Voice (when it is released).

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Common Voice does not include Galician yet; load OpenSLR SLR77 or your own dataset instead
test_dataset = load_dataset("common_voice", "gl", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model = Wav2Vec2ForCTC.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model.to("cuda")

chars_to_ignore_regex = '[^a-záéíóúñ ]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to normalize the transcriptions and read the audio files as arrays
def speech_file_to_array_fn(batch):
    # Lowercase before stripping, since the regex only keeps lowercase characters
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"].lower())
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference in batches and decode the CTC output to text
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
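
For reference, this is what the normalization step does to a transcription; the sample sentence below is made up for illustration:

```python
import re

chars_to_ignore_regex = '[^a-záéíóúñ ]'

# Lowercase, then strip everything outside the Galician alphabet and spaces
sentence = "Isto é unha proba, con puntuación!"
print(re.sub(chars_to_ignore_regex, '', sentence.lower()))
# -> isto é unha proba con puntuación
```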

**Test Result**: 16.79 % on the OpenSLR test split

## Training

The OpenSLR [SLR77](https://openslr.org/77/) dataset was used for training and validation. The dataset was split into 70% for training, 15% for validation, and 15% for testing, as sketched below.
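
The exact split is not published with the card; a comparable 70/15/15 split can be produced with the `datasets` library. A minimal sketch (the `load_dataset` call and the seed are assumptions, not the training script's actual code):

```python
from datasets import load_dataset

# Hypothetical loading call for OpenSLR SLR77; adjust to however you load the corpus
dataset = load_dataset("openslr", "SLR77", split="train")

# 70% train / 30% held out, then split the held-out part into validation and test
split = dataset.train_test_split(test_size=0.3, seed=42)
holdout = split["test"].train_test_split(test_size=0.5, seed=42)

train_ds, valid_ds, test_ds = split["train"], holdout["train"], holdout["test"]
```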

The script used for training can be found [here](https://github.com/diego-fustes/xlsr-fine-tuning-gl).