---
language: ja
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Japanese by Jonatas Grosman
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ja
      type: common_voice
      args: ja
    metrics:
    - name: Test WER
      type: wer
      value: 81.80
    - name: Test CER
      type: cer
      value: 20.16
---

# Wav2Vec2-Large-XLSR-53-Japanese

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice), [CSS10](https://github.com/Kyubyong/css10) and [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.

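If your audio files use a different sampling rate, they can be resampled on load. A minimal sketch (not part of the original card; the file path is a placeholder):

```python
import librosa

# librosa resamples to the requested rate while loading the file
speech_array, sampling_rate = librosa.load("/path/to/file.mp3", sr=16_000)
```
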
This model has been fine-tuned thanks to the GPU credits generously provided by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)

The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage

The model can be used directly (without a language model) as follows...

Using the [ASRecognition](https://github.com/jonatasgrosman/asrecognition) library:

```python
from asrecognition import ASREngine

asr = ASREngine("ja", model_path="jonatasgrosman/wav2vec2-large-xlsr-53-japanese")

audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = asr.transcribe(audio_paths)
```

Writing your own inference script:

```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "ja"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-japanese"
SAMPLES = 10

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```

| Reference | Prediction |
| ------------- | ------------- |
| 祖母は、おおむね機嫌よく、サイコロをころがしている。 | 人母は重にきね起くさいがしている |
| 財布をなくしたので、交番へ行きます。 | 財布をなく手端ので勾番へ行きます |
| 飲み屋のおやじ、旅館の主人、医者をはじめ、交際のある人にきいてまわったら、みんな、私より収入が多いはずなのに、税金は安い。 | ノ宮屋のお親じ旅館の主に医者をはじめ交際のアル人トに聞いて回ったらみんな私より収入が多いはなうに税金は安い |
| 新しい靴をはいて出かけます。 | だらしい靴をはいて出かけます |
| このためプラズマ中のイオンや電子の持つ平均運動エネルギーを温度で表現することがある | このためプラズマ中のイオンや電子の持つ平均運動エネルギーを温度で表弁することがある |
| 松井さんはサッカーより野球のほうが上手です。 | 松井さんはサッカーより野球のほうが上手です |
| 新しいお皿を使います。 | 新しいお皿を使います |
| 結婚以来三年半ぶりの東京も、旧友とのお酒も、夜行列車も、駅で寝て、朝を待つのも久しぶりだ。 | 結婚ル二来三年半降りの東京も吸とのお酒も野越者も駅で寝て朝を待つの久しぶりた |
| これまで、少年野球、ママさんバレーなど、地域スポーツを支え、市民に密着してきたのは、無数のボランティアだった。 | これまで少年野球<unk>三バレーなど地域スポーツを支え市民に満着してきたのは娘数のボランティアだった |
| 靴を脱いで、スリッパをはきます。 | 靴を脱いでスイパーをはきます |

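To transcribe a single local file without the `datasets` library, the same pieces can be combined directly. A minimal sketch (not part of the original card; the path is a placeholder):

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-japanese"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Load and resample the file to the 16 kHz rate the model expects
speech_array, _ = librosa.load("/path/to/file.mp3", sr=16_000)
inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```
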
## Evaluation

The model can be evaluated as follows on the Japanese test data of Common Voice.

```python
import warnings

import torch
import re
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "ja"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-japanese"
DEVICE = "cuda"  # change to "cpu" if no GPU is available

CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
                   "؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
                   "{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
                   "、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
                   "『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"]

test_dataset = load_dataset("common_voice", LANG_ID, split="test")

wer = load_metric("wer.py")  # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py")  # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py

chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Running batched inference over the test set
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]

print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```

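If the custom `wer.py`/`cer.py` scripts above are unavailable, a minimal sketch of an alternative using the Hugging Face [`evaluate`](https://github.com/huggingface/evaluate) library (my own suggestion, not the original evaluation setup; it reuses the `predictions` and `references` lists built above and may give slightly different numbers than the chunked custom metrics):

```python
import evaluate  # pip install evaluate jiwer

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

print(f"WER: {wer_metric.compute(predictions=predictions, references=references) * 100}")
print(f"CER: {cer_metric.compute(predictions=predictions, references=references) * 100}")
```
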
**Test Result**:

In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-10). Note that the table below may show results that differ from those already reported; this may be caused by specificities of the other evaluation scripts used. Also note that WER counts substitutions, deletions, and insertions against the number of words in the reference, so it can exceed 100% when a transcription contains many insertions.

| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-japanese | **81.80%** | **20.16%** |
| vumichien/wav2vec2-large-xlsr-japanese | 1108.86% | 23.40% |
| qqhann/w2v_hf_jsut_xlsr53 | 1012.18% | 70.77% |