---
language:
- zh-HK
- yue
datasets:
- common_voice
metrics:
- cer

tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-cantonese
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice zh-HK
      type: common_voice
      args: zh-HK
    metrics:
    - name: Test CER
      type: cer
      value: 15.36
---

# Wav2Vec2-Large-XLSR-53-Cantonese

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Cantonese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16 kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "zh-HK", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("ctl/wav2vec2-large-xlsr-cantonese")
model = Wav2Vec2ForCTC.from_pretrained("ctl/wav2vec2-large-xlsr-cantonese")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
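
The `torch.argmax` plus `processor.batch_decode` steps together amount to greedy CTC decoding: take the most likely token per frame, collapse consecutive repeats, then drop the CTC blank token. A minimal pure-Python sketch of that rule (the token ids and toy vocabulary below are hypothetical, not the model's actual vocabulary):

```python
def ctc_greedy_decode(ids, vocab, blank_id=0):
    """Collapse repeated ids, drop CTC blanks, then map ids to characters."""
    collapsed = []
    prev = None
    for i in ids:
        if i != prev:  # keep only the first of each run of identical ids
            collapsed.append(i)
        prev = i
    return "".join(vocab[i] for i in collapsed if i != blank_id)

# hypothetical per-frame argmax output and vocabulary
vocab = {0: "<pad>", 1: "你", 2: "好"}
print(ctc_greedy_decode([1, 1, 0, 2, 2, 0], vocab))  # → 你好
```

Note that the blank between the two runs of `2` (or `1`) is what lets CTC emit genuinely repeated characters; without it, `[2, 2]` collapses to a single character.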

## Evaluation

The model can be evaluated as follows on the Chinese (Hong Kong) test data of Common Voice.

```python
!pip install jiwer
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

lang_id = "zh-HK"
model_id = "ctl/wav2vec2-large-xlsr-cantonese"

chars_to_ignore_regex = r'[\,\?\.\!\-\;\:"\“\%\‘\”\�\.\⋯\!\-\:\–\。\》\,\)\,\?\;\~\~\…\︰\,\(\」\‧\《\﹔\、\—\/\,\「\﹖\·\']'

test_dataset = load_dataset("common_voice", lang_id, split="test")
cer = load_metric("cer")

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.to("cuda")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays and strip punctuation from the references.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and greedily decode the predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=16)

print("CER: {:.2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
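
The metric reported here is the character error rate (CER): the character-level Levenshtein (edit) distance between the prediction and the reference, divided by the reference length. A self-contained sketch for sanity-checking the metric (the example strings below are made up, not taken from Common Voice):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance / number of reference characters."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # edit distances for the empty reference prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m

# one substituted character out of five -> 20% CER
print("CER: {:.2f}".format(100 * cer("我哋去食飯", "我地去食飯")))
```

Because Cantonese is written without spaces, CER is the natural metric here; a word error rate would depend on an arbitrary segmentation.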

**Test Result**: 15.51 %

## Training

The Common Voice `train` and `validation` splits were used for training.

The script used for training will be posted [here](https://github.com/chutaklee/CantoASR).