shiwangi27 committed on Commit e069486 (parent: 70fc7fc)

First readme here!
---
language: hi
datasets:
- openslr_hindi
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- xlsr-hindi
license: apache-2.0
model-index:
- name: Fine-tuned Hindi XLSR Wav2Vec2 Large
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    datasets:
    - name: Common Voice hi
      type: common_voice
      args: hi
    - name: OpenSLR Hindi
      url: https://www.openslr.org/resources/103/
    metrics:
    - name: Test WER
      type: wer
      value: 46.05
---

# Wav2Vec2-Large-XLSR-Hindi

This model is [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) fine-tuned on Hindi, using the OpenSLR Hindi dataset for training and the Common Voice Hindi test set for evaluation. Training used 10,000 randomly sampled utterances from OpenSLR Hindi; the OpenSLR train and test splits were combined into a single training set to increase the amount of variation. The OpenSLR audio is 8kHz, so it was upsampled to 16kHz for training.
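
To illustrate what that upsampling step does, here is a minimal sketch of doubling an 8kHz signal to 16kHz with linear interpolation. This is only an illustration: the actual pipeline used torchaudio's `Resample` transform, which also applies a proper anti-aliasing filter.

```python
import numpy as np

def upsample_8k_to_16k(audio: np.ndarray) -> np.ndarray:
    """Naive 8kHz -> 16kHz upsampling via linear interpolation.

    Illustration only; torchaudio.transforms.Resample (used for training)
    additionally low-pass filters the signal.
    """
    src_len = len(audio)
    dst_len = src_len * 2
    src_times = np.arange(src_len) / 8_000   # timestamps of 8kHz samples
    dst_times = np.arange(dst_len) / 16_000  # timestamps of 16kHz samples
    return np.interp(dst_times, src_times, audio)

one_second_8k = np.sin(2 * np.pi * 440 * np.arange(8_000) / 8_000)
one_second_16k = upsample_8k_to_16k(one_second_8k)
print(len(one_second_16k))  # 16000
```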

When using this model, make sure that your speech input is sampled at 16kHz.

*Note: This is the first iteration of fine-tuning. This model will be updated if the WER improves in future experiments.*

## Test Results

| Dataset | WER |
| ------- | --- |
| Common Voice Hindi (test split) | 46.055 % |
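
WER (word error rate) is word-level edit distance divided by the number of reference words: (substitutions + deletions + insertions) / N. A minimal self-contained sketch, equivalent in spirit to the `wer` metric used in the evaluation script:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i ref words and first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# 1 inserted word against 3 reference words -> WER of 1/3
print(word_error_rate("the cat sat", "the cat sat down"))
```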

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")

# Common Voice audio is 48kHz; resample to the 16kHz the model expects.
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets: read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
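
Under the hood, `processor.batch_decode` performs greedy CTC decoding: repeated ids are collapsed, the CTC blank token is dropped, and the remaining ids are mapped to characters. A minimal sketch of the collapse step, assuming blank id 0 (the processor's pad token serves as the blank):

```python
def ctc_greedy_collapse(ids, blank_id=0):
    """Collapse repeated ids and drop blanks, as greedy CTC decoding does.

    Sketch of what Wav2Vec2Processor.batch_decode performs before mapping
    ids back to characters; blank_id=0 is an assumption matching the
    default pad token.
    """
    out = []
    prev = None
    for i in ids:
        # A blank between two equal ids keeps them distinct, so we only
        # skip ids that repeat immediately or are the blank itself.
        if i != prev and i != blank_id:
            out.append(i)
        prev = i
    return out

print(ctc_greedy_collapse([0, 5, 5, 0, 7, 7, 7, 2, 0]))  # [5, 7, 2]
```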

## Evaluation

The model can be evaluated as follows on the Hindi test data of Common Voice.

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("shiwangi27/wave2vec2-large-xlsr-hindi")
model.to("cuda")

# Punctuation to strip before scoring, including the Devanagari danda (।).
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“\%\�\।\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets: normalize the text and read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
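
The `chars_to_ignore_regex` in the evaluation script strips punctuation, including the Devanagari danda (।), before references and predictions are compared. A quick check of its effect (the sample sentence is made up for illustration):

```python
import re

# Same character class as in the evaluation script; a raw string avoids
# invalid-escape warnings.
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“\%\�\।\']'

sample = "नमस्ते, आप कैसे हैं।"
cleaned = re.sub(chars_to_ignore_regex, '', sample).lower()
print(cleaned)  # नमस्ते आप कैसे हैं
```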

## Code

The notebook used to train this model is available at [shiwangi27/googlecolab](https://github.com/shiwangi27/googlecolab/blob/main/run_common_voice.ipynb). A modified version of [run_common_voice.py](https://github.com/shiwangi27/googlecolab/blob/main/run_common_voice.py) was used for training.