---
language: or
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: odia XLSR Wav2Vec2 Large 2000
  results:
  - task: 
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice or
      type: common_voice
      args: or
    metrics:
       - name: Test WER
         type: wer
         value: 54.6
---

# Wav2Vec2-Large-XLSR-53-or 
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16 kHz.
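
If your recordings are at a different sampling rate, they can be resampled on the fly; a minimal sketch with torchaudio (the file name is a placeholder):

```python
import torchaudio

# Load a recording; "example.wav" is a placeholder for your own file.
speech_array, sampling_rate = torchaudio.load("example.wav")

# The model expects 16 kHz input, so resample if needed.
if sampling_rate != 16_000:
    speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)
```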

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "or", split="test[:2%]") 

processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-or") 
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-or") 

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the dataset.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
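
The same pipeline works for a single local recording; a minimal sketch reusing the `processor` and `model` loaded above (the file name is a placeholder):

```python
# Transcribe one local recording with the processor and model from above.
speech_array, sampling_rate = torchaudio.load("my_clip.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```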


## Evaluation

The model can be evaluated as follows on the Odia test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "or", split="test") 
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-or") 
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-or") 
model.to("cuda")

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'  
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the dataset.
# We need to normalize the text and read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference over the test set.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 54.6 %  
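For context, WER is the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words, so 54.6 % means roughly half of the output words differ from the reference. A toy check with the same metric (the example strings are made up):

```python
from datasets import load_metric

wer = load_metric("wer")
# One substitution out of four reference words -> WER = 0.25
print(wer.compute(predictions=["the cat sat down"], references=["the cat sat up"]))
```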

## Training

The Common Voice `train` and `validation` splits were used for training; the `test` split was used for prediction and evaluation.

The script used for training can be found [here](https://github.com/rahul-art/wav2vec2_or).
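
For orientation only, a sketch of the kind of configuration such XLSR fine-tuning scripts typically use; the hyperparameters below are illustrative assumptions, not the values actually used, which live in the linked repository:

```python
from transformers import Wav2Vec2ForCTC, TrainingArguments

# Illustrative values only; the real configuration is in the linked script.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    attention_dropout=0.1,
    hidden_dropout=0.1,
    mask_time_prob=0.05,
    ctc_loss_reduction="mean",
)
model.freeze_feature_extractor()  # keep the pretrained convolutional encoder frozen

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-or",
    per_device_train_batch_size=16,
    num_train_epochs=30,
    learning_rate=3e-4,
    fp16=True,
    evaluation_strategy="steps",
)
# The Trainer wiring, data collator, and vocabulary construction
# are handled in the training script itself.
```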