---
language: hi
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: cc
model-index:
- name: Wav2Vec2 Hindi Model by Swayam Mittal
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice hi
      type: common_voice
      args: hi
    metrics:
    - name: Test WER
      type: wer
      value: 24.17
---

# hindi-clsril-100

Fine-tuned [Harveenchadha/wav2vec2-pretrained-clsril-23-10k](https://huggingface.co/Harveenchadha/wav2vec2-pretrained-clsril-23-10k) on Hindi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset, together with the [openSLR](http://www.openslr.org/103/) Hindi dataset.
When using this model, make sure that your speech input is sampled at 16 kHz.
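
For plain greedy transcription (no language model), a minimal sketch like the following should work; `sample.wav` is a hypothetical input file, and loading via the generic `Wav2Vec2Processor` assumes the usual wav2vec2 tokenizer/feature-extractor layout:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# "sample.wav" is a hypothetical placeholder for any Hindi speech recording.
speech, sr = torchaudio.load("sample.wav")
if sr != 16_000:  # the model expects 16 kHz mono input
    speech = torchaudio.transforms.Resample(sr, 16_000)(speech)

processor = Wav2Vec2Processor.from_pretrained("swayam01/hindi-clsril-100")
model = Wav2Vec2ForCTC.from_pretrained("swayam01/hindi-clsril-100")

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```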

## Evaluation
The model can be used directly, with or without a language model. The following evaluates it on the Common Voice Hindi test split, decoding with the language model:

```python
#!pip install datasets==1.4.1
#!pip install torchaudio
#!pip install jiwer
#!pip install pyctcdecode kenlm  # needed for Wav2Vec2ProcessorWithLM
#!pip install transformers  # Wav2Vec2ProcessorWithLM requires a recent release (>=4.15)

import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

device = "cuda" if torch.cuda.is_available() else "cpu"

processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("swayam01/hindi-clsril-100")
model = Wav2Vec2ForCTC.from_pretrained("swayam01/hindi-clsril-100").to(device)

test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")

# Punctuation and stray symbols to strip from the reference transcriptions.
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\�\।\']'

# Common Voice audio is 48 kHz; the model expects 16 kHz input.
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Read each audio file into a 16 kHz array and normalize the reference text.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Batched inference; the logits are decoded with the bundled language model.
def evaluate(batch):
    inputs = processor_with_lm(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to(device), attention_mask=inputs.attention_mask.to(device)).logits
    batch["pred_strings"] = processor_with_lm.batch_decode(logits.cpu().numpy()).text
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 24.17 %
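
For reference, word error rate counts word-level substitutions (S), deletions (D), and insertions (I) against the number of reference words (N): WER = (S + D + I) / N. A quick sanity check with `jiwer` (installed above; the sentences are made-up examples):

```python
from jiwer import wer

reference = "मेरा नाम राम है"     # four reference words
hypothesis = "मेरा नाम श्याम है"  # one word substituted
print(wer(reference, hypothesis))  # 1 substitution / 4 words = 0.25
```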

## Training

The Common Voice Hindi `train` and `validation` splits were used for training, together with the openSLR Hindi `train`, `validation`, and `test` splits.
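
A sketch of how such a combined training set can be assembled with `datasets.concatenate_datasets`; `load_openslr_hi` is a hypothetical helper standing in for loading the openSLR-103 download, since that corpus is distributed from openslr.org rather than as a hub dataset:

```python
from datasets import load_dataset, concatenate_datasets

# Common Voice Hindi: both train and validation go into training.
cv_train = load_dataset("common_voice", "hi", split="train")
cv_valid = load_dataset("common_voice", "hi", split="validation")

# Hypothetical helper that returns the openSLR Hindi audio/transcripts
# (downloaded from http://www.openslr.org/103/) as a datasets.Dataset
# with the same "path"/"sentence" columns as Common Voice.
slr_train = load_openslr_hi("train")
slr_valid = load_openslr_hi("validation")
slr_test = load_openslr_hi("test")

train_dataset = concatenate_datasets([cv_train, cv_valid, slr_train, slr_valid, slr_test])
```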

The script used for training can be found in this [Colab notebook](https://colab.research.google.com/drive/1YL_csb3LRjqWybeyvQhZ-Hem2dtpvq_x?usp=sharing).