danurahul committed on
Commit
62829e5
1 Parent(s): 5cb4e45
Files changed (1)
  1. README.md +129 -1
README.md CHANGED
@@ -1 +1,129 @@
- hello
+ ---
+ language: pa-IN
+ datasets:
+ - common_voice
+ metrics:
+ - wer
+ tags:
+ - audio
+ - automatic-speech-recognition
+ - speech
+ - xlsr-fine-tuning-week
+ license: apache-2.0
+ model-index:
+ - name: danurahul/wav2vec2-large-xlsr-pa-IN
+   results:
+   - task:
+       name: Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: Common Voice pa-IN
+       type: common_voice
+       args: pa-IN
+     metrics:
+     - name: Test WER
+       type: wer
+       value: {wer_result_on_test} #TODO (IMPORTANT): replace {wer_result_on_test} with the WER you achieved on the common_voice test set, in the format XX.XX (without the % sign). Please remember to fill in this value after evaluating your model so that it appears on the leaderboard; if you fill out this model card before evaluating, edit it afterwards to add the value.
+ ---
+
+ # Wav2Vec2-Large-XLSR-53-Punjabi
+ Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Punjabi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
+ When using this model, make sure that your speech input is sampled at 16 kHz.
+
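+ If your audio is not already sampled at 16 kHz, it can be resampled with torchaudio first (a minimal sketch; `example.wav` is a placeholder path):
+
+ ```python
+ import torchaudio
+
+ # Load a hypothetical recording and resample it to 16 kHz if needed.
+ speech_array, sampling_rate = torchaudio.load("example.wav")
+ if sampling_rate != 16_000:
+     speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)
+ ```
+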
+ ## Usage
+
+ The model can be used directly (without a language model) as follows:
+
+ ```python
+ import torch
+ import torchaudio
+ from datasets import load_dataset
+ from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
+
+ test_dataset = load_dataset("common_voice", "pa-IN", split="test[:2%]")
+
+ processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
+ model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
+
+ resampler = torchaudio.transforms.Resample(48_000, 16_000)
+
+ # Preprocessing the datasets.
+ # We need to read the audio files as arrays.
+ def speech_file_to_array_fn(batch):
+     speech_array, sampling_rate = torchaudio.load(batch["path"])
+     batch["speech"] = resampler(speech_array).squeeze().numpy()
+     return batch
+
+ test_dataset = test_dataset.map(speech_file_to_array_fn)
+ inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
+
+ with torch.no_grad():
+     logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
+
+ predicted_ids = torch.argmax(logits, dim=-1)
+
+ print("Prediction:", processor.batch_decode(predicted_ids))
+ print("Reference:", test_dataset["sentence"][:2])
+ ```
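+
+ The same pipeline applies to a single local recording (a sketch; `recording.wav` is a placeholder path, and the source sampling rate is read from the file rather than assumed to be 48 kHz):
+
+ ```python
+ import torch
+ import torchaudio
+ from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
+
+ processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
+ model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
+
+ # Load the file and resample from its native rate to the 16 kHz the model expects.
+ speech_array, sampling_rate = torchaudio.load("recording.wav")
+ speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
+
+ inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
+ with torch.no_grad():
+     logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
+ print("Prediction:", processor.batch_decode(torch.argmax(logits, dim=-1))[0])
+ ```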
+
+
+ ## Evaluation
+
+ The model can be evaluated as follows on the Punjabi test data of Common Voice.
+
+
+ ```python
+ import torch
+ import torchaudio
+ from datasets import load_dataset, load_metric
+ from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
+ import re
+
+ test_dataset = load_dataset("common_voice", "pa-IN", split="test")
+
+ wer = load_metric("wer")
+
+ processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
+
+ model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
+
+ model.to("cuda")
+
+ chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
+ resampler = torchaudio.transforms.Resample(48_000, 16_000)
+
+ # Preprocessing the datasets.
+ # We need to read the audio files as arrays and normalize the transcriptions.
+ def speech_file_to_array_fn(batch):
+     batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
+     speech_array, sampling_rate = torchaudio.load(batch["path"])
+     batch["speech"] = resampler(speech_array).squeeze().numpy()
+     return batch
+
+ test_dataset = test_dataset.map(speech_file_to_array_fn)
+
+ # Run batched inference on the GPU and decode the predictions.
+ def evaluate(batch):
+     inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
+
+     with torch.no_grad():
+         logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
+
+     pred_ids = torch.argmax(logits, dim=-1)
+     batch["pred_strings"] = processor.batch_decode(pred_ids)
+     return batch
+
+ result = test_dataset.map(evaluate, batched=True, batch_size=8)
+
+ print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
+ ```
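+
+ For reference, the `wer` metric counts word-level edits against the reference transcription; a quick toy check of the metric itself (illustrative only, not part of the evaluation script):
+
+ ```python
+ from datasets import load_metric
+
+ wer = load_metric("wer")
+ # One substituted word out of four reference words gives a WER of 0.25.
+ print(wer.compute(predictions=["hello world how are"], references=["hello world how is"]))
+ ```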
+
+ **Test Result**: XX.XX % # TODO: write the output of the print statement here. IMPORTANT: Please remember to also replace {wer_result_on_test} in the tags at the top of this model card with this value.
+
+
+ ## Training
+
+ The Common Voice `train` and `validation` splits were used for training, as well as for validation and testing, as sketched below.
+
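+ A minimal sketch of how those splits can be loaded (the full preprocessing and training logic lives in the script linked below):
+
+ ```python
+ from datasets import load_dataset
+
+ # Concatenate the train and validation splits of the Punjabi Common Voice data.
+ train_dataset = load_dataset("common_voice", "pa-IN", split="train+validation")
+ test_dataset = load_dataset("common_voice", "pa-IN", split="test")
+ ```
+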
+ The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a Colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script to GitHub and paste the link here.