---
language: lg
datasets:
- common_voice (train+validation+other[upvotes > downvotes])
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Lucio XLSR Wav2Vec2 Large Luganda
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice lg
      type: common_voice
      args: lg
    metrics:
    - name: Test WER
      type: wer
      value: 48.47
---

# Wav2Vec2-Large-XLSR-53-lg

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Luganda using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. Training used the `train`, `validation` and `other` splits (keeping only the `other` examples with more upvotes than downvotes), with the `test` split serving as both validation and test data.
When using this model, make sure that your speech input is sampled at 16 kHz.
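
Common Voice clips are distributed at 48 kHz, so audio generally needs resampling before inference. A minimal sketch of handling an arbitrary input rate with `torchaudio` (the file name is a placeholder):

```python
import torchaudio

# Hypothetical input clip; replace with your own file.
speech_array, sampling_rate = torchaudio.load("clip.mp3")
if sampling_rate != 16_000:
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    speech_array = resampler(speech_array)
```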

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
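
Taking the `argmax` over the logits and decoding the resulting ids with `processor.batch_decode` performs greedy CTC decoding; no external language model is involved.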

## Evaluation

The model can be evaluated as follows on the Luganda test data of Common Voice.

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model.to("cuda")

chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to normalize the transcripts and read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference in batches and decode the predicted ids to strings.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 48.47 %
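
For intuition, WER counts word-level substitutions, deletions and insertions against the reference. A toy check with the same `wer` metric loaded above (illustrative strings, not model output):

```python
from datasets import load_metric

wer = load_metric("wer")
# One substituted word out of three reference words gives WER ≈ 0.33.
print(wer.compute(predictions=["one two three"], references=["one two tree"]))
```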

## Training

The Common Voice `train`, `validation` and `other` splits were used for training, with a filter applied to drop `other` examples that did not have more upvotes than downvotes (a sketch of such a filter appears below).

The training script was the `run_finetuning.py` script provided in OVHcloud's databuzzword/hf-wav2vec image.
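
The exact filtering code is not part of this card; a minimal sketch of how the upvote filter could be expressed with `datasets`, assuming the standard Common Voice `up_votes` and `down_votes` columns:

```python
from datasets import load_dataset, concatenate_datasets

train = load_dataset("common_voice", "lg", split="train")
validation = load_dataset("common_voice", "lg", split="validation")
other = load_dataset("common_voice", "lg", split="other")

# Keep only the `other` clips whose transcripts were net-positively validated.
other = other.filter(lambda example: example["up_votes"] > example["down_votes"])

train_data = concatenate_datasets([train, validation, other])
```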