---
language: gu
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Gujarati by Gunjan Chhablani
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: OpenSLR gu
      type: openslr
    metrics:
    - name: Test WER
      type: wer
      value: 23.55
---

# Wav2Vec2-Large-XLSR-53-Gujarati

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Gujarati using the [OpenSLR SLR78](http://openslr.org/78/) dataset. When using this model, make sure that your speech input is sampled at 16 kHz.

## Usage

The model can be used directly (without a language model) as follows, assuming you have a dataset with Gujarati `sentence` and `path` fields:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET.
# For a sample, see the Colab link in the Training section.

processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")

# The original data has a 48,000 Hz sampling rate; adjust for your input.
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the dataset: read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
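The TODO above leaves the test-set loading open. As one possibility (a sketch, not this card's actual loading code: the tab-separated index layout and the WAV naming scheme are assumptions about how an OpenSLR SLR78 download is unpacked), you could parse a transcript index into `path`/`sentence` records:

```python
import csv

# Sketch only: assumes an OpenSLR-style tab-separated index file whose first
# column is an utterance ID and last column is the transcript, with WAV files
# named after those IDs.
def load_test_entries(tsv_path, audio_dir):
    entries = []
    with open(tsv_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            utt_id, sentence = row[0], row[-1]
            entries.append({"path": f"{audio_dir}/{utt_id}.wav",
                            "sentence": sentence})
    return entries
```

A list of such dicts can then be turned into a `datasets.Dataset` (e.g. with `Dataset.from_list(entries)`) before applying the `map` calls above.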


## Evaluation

The model can be evaluated as follows on 10% of the Gujarati data from OpenSLR.

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For a sample, see the Colab link in the Training section.

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…\'\_\’]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the dataset: normalize the transcripts and read the audio
# files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"),
                       attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
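For intuition, the WER reported below is the word-level edit distance (substitutions + insertions + deletions) between prediction and reference, divided by the number of reference words. A minimal self-contained sketch of that computation, independent of the `load_metric("wer")` call above:

```python
def wer_score(reference, prediction):
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), prediction.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)
```

A WER of 23.55% therefore means roughly one word error for every four to five reference words.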

**Test Result**: 23.55 %

## Training

90% of the OpenSLR Gujarati Male+Female dataset was used for training, after removing a few examples that contained Roman characters.
The Colab notebook used for training can be found [here](https://colab.research.google.com/drive/1fRQlgl4EPR4qKGScgza3MpWgbL5BeWtn?usp=sharing).
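
The filtering and split described above can be sketched as follows. This is a hedged reconstruction, not the notebook's actual code: the exact Roman-character heuristic and the split ordering are assumptions.

```python
import re

# Assumption: "Roman characters" means any ASCII Latin letter in the transcript.
def has_roman_chars(sentence):
    return re.search(r"[A-Za-z]", sentence) is not None

def filter_and_split(examples, train_fraction=0.9):
    """Drop transcripts containing Roman characters, then take a 90/10 split."""
    clean = [ex for ex in examples if not has_roman_chars(ex["sentence"])]
    cut = int(len(clean) * train_fraction)
    return clean[:cut], clean[cut:]
```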