---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese A
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice pt
      type: common_voice
      args: pt
    metrics:
    - name: Test WER
      type: wer
      value: 15.037146%
---

# Wav2Vec2-Large-XLSR-53-Portuguese

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
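
The same checkpoint also works on a local recording. Below is a minimal sketch under two assumptions: `my_audio.wav` is a placeholder path (not a file that ships with this model), and the recording is mono. The file's native sample rate is resampled to the 16kHz the model expects.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")

# "my_audio.wav" is a placeholder: point this at any mono Portuguese recording.
speech_array, sampling_rate = torchaudio.load("my_audio.wav")

# Resample from the file's native rate to the 16 kHz the model was trained on.
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

print("Transcription:", processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```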

## Evaluation

The model can be evaluated as follows on the Portuguese test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to normalize the transcripts and read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference on GPU and decode the predicted ids to strings.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result (WER)**: 15.037146%
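
For context, WER (word error rate) is the number of word substitutions, deletions, and insertions divided by the number of words in the reference. A quick sanity check with the same `load_metric("wer")` used above (the sentences are made up for illustration):

```python
from datasets import load_metric

wer = load_metric("wer")

# One substitution ("em" -> "na") against a 5-word reference: WER = 1/5 = 0.2.
score = wer.compute(
    predictions=["o gato está na casa"],
    references=["o gato está em casa"],
)
print(score)  # 0.2
```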

## Training

The Common Voice `train` and `validation` splits were used for training.

The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
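
For reference, those splits can be combined into a single training set with the `datasets` split syntax. This is a minimal loading sketch, not the full fine-tuning recipe (see the linked script for that):

```python
from datasets import load_dataset

# Concatenate the Common Voice train and validation splits,
# matching the split usage described above.
train_dataset = load_dataset("common_voice", "pt", split="train+validation")
print(train_dataset)
```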