---
language: th
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning
- robust-speech-event
license: cc-by-sa-4.0
---
13
# `wav2vec2-large-xlsr-53-th`
Finetuning `wav2vec2-large-xlsr-53` on Thai [Common Voice 7.0](https://commonvoice.mozilla.org/en/datasets)

[Read more on our blog](https://medium.com/airesearch-in-th/airesearch-in-th-3c1019a99cd)

We finetune [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) based on [Fine-tuning Wav2Vec2 for English ASR](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb) using the Thai examples of [Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets). The notebooks and scripts can be found in [vistec-ai/wav2vec2-large-xlsr-53-th](https://github.com/vistec-ai/wav2vec2-large-xlsr-53-th). The pretrained model and processor can be found at [airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th).
20
## `robust-speech-event`

We add the `syllable_tokenize` and `word_tokenize` ([PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)) and [deepcut](https://github.com/rkcosmos/deepcut) tokenizers to the `eval.py` of [robust-speech-event](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#evaluation):

```bash
python eval.py --model_id ./ --dataset mozilla-foundation/common_voice_7_0 --config th --split test --log_outputs --thai_tokenizer newmm/syllable/deepcut/cer
```
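The `--thai_tokenizer` flag matters because Thai is written without spaces, so the reported WER depends on how the text is segmented. A minimal illustration with hand-written token lists standing in for `newmm` and `syllable_tokenize` output (no actual tokenizer calls):

```python
# Thai has no word boundaries, so WER changes with the segmentation used.
# These token lists are hand-written stand-ins for newmm / syllable output.
word_ref = ["สามารถ", "รับทราบ"]          # word-level (newmm-style): 2 tokens
syll_ref = ["สา", "มารถ", "รับ", "ทราบ"]  # syllable-level: 4 tokens

# A single substitution error scores differently against each segmentation:
word_wer = 1 / len(word_ref)
syll_wer = 1 / len(syll_ref)
print(word_wer, syll_wer)  # 0.5 0.25
```

This is why the table below reports separate WER columns per tokenizer, alongside the segmentation-free CER.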
28
### Eval results on Common Voice 7 "test"

| | WER PyThaiNLP 2.3.1 | WER deepcut | SER | CER |
|---------------------------------|---------------------|-------------|---------|---------|
| Only Tokenization | 0.9524% | 2.5316% | 1.2346% | 0.1623% |
| Cleaning rules and Tokenization | TBD | TBD | TBD | TBD |
35
36
## Usage

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# load pretrained processor and model
processor = Wav2Vec2Processor.from_pretrained("airesearch/wav2vec2-large-xlsr-53-th")
model = Wav2Vec2ForCTC.from_pretrained("airesearch/wav2vec2-large-xlsr-53-th")

# function to resample audio to 16,000 Hz
def speech_file_to_array_fn(batch,
                            text_col="sentence",
                            fname_col="path",
                            resampling_to=16000):
    speech_array, sampling_rate = torchaudio.load(batch[fname_col])
    resampler = torchaudio.transforms.Resample(sampling_rate, resampling_to)
    batch["speech"] = resampler(speech_array)[0].numpy()
    batch["sampling_rate"] = resampling_to
    batch["target_text"] = batch[text_col]
    return batch

# get 2 examples as sample input
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

# infer
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])

>> Prediction: ['และ เขา ก็ สัมผัส ดีบุก', 'คุณ สามารถ รับทราบ เมื่อ ข้อความ นี้ ถูก อ่าน แล้ว']
>> Reference: ['และเขาก็สัมผัสดีบุก', 'คุณสามารถรับทราบเมื่อข้อความนี้ถูกอ่านแล้ว']
```
72
## Datasets

[Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets) contains 133 validated hours of Thai (255 total hours) at 5GB. We pre-tokenize with `pythainlp.tokenize.word_tokenize`. We preprocess the dataset using the cleaning rules described in `notebooks/cv-preprocess.ipynb` by [@tann9949](https://github.com/tann9949). We then deduplicate and split as described in [ekapolc/Thai_commonvoice_split](https://github.com/ekapolc/Thai_commonvoice_split) in order to 1) avoid data leakage due to random splits after cleaning in [Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets) and 2) preserve the majority of the data for the training set. The dataset loading script is `scripts/th_common_voice_70.py`. You can use this script together with `train_cleand.tsv`, `validation_cleaned.tsv` and `test_cleaned.tsv` to reproduce our splits. The resulting dataset is as follows:
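A sketch of the shape of the cleaning and pre-tokenization step. The actual cleaning rules live in `notebooks/cv-preprocess.ipynb`; `clean_sentence` and the stand-in tokenizer below are hypothetical, and in the real pipeline the tokenizer is `pythainlp.tokenize.word_tokenize`:

```python
import re

# Hypothetical cleaning rule: drop punctuation, collapse whitespace.
def clean_sentence(text):
    text = re.sub(r"[\"'!?.,;:()\[\]]", "", text)
    return re.sub(r"\s+", " ", text).strip()

# Space-join word tokens so the CTC labels carry explicit word boundaries.
def pre_tokenize(text, tokenize_fn):
    return " ".join(tokenize_fn(text))

# Stand-in tokenizer with a fixed segmentation for this one sentence:
fake_tokenize = lambda s: ["และ", "เขา", "ก็", "สัมผัส", "ดีบุก"]
print(pre_tokenize(clean_sentence("และเขาก็สัมผัสดีบุก!"), fake_tokenize))
```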
76
```
DatasetDict({
    train: Dataset({
        features: ['path', 'sentence'],
        num_rows: 86586
    })
    test: Dataset({
        features: ['path', 'sentence'],
        num_rows: 2502
    })
    validation: Dataset({
        features: ['path', 'sentence'],
        num_rows: 3027
    })
})
```
93
## Training

We finetuned using the following configuration on a single V100 GPU and chose the checkpoint with the lowest validation loss. The finetuning script is `scripts/wav2vec2_finetune.py`.
97
```python
from transformers import Wav2Vec2ForCTC, TrainingArguments

# create model
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    attention_dropout=0.1,
    hidden_dropout=0.1,
    feat_proj_dropout=0.0,
    mask_time_prob=0.05,
    layerdrop=0.1,
    gradient_checkpointing=True,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer)
)
model.freeze_feature_extractor()
training_args = TrainingArguments(
    output_dir="../data/wav2vec2-large-xlsr-53-thai",
    group_by_length=True,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=1,
    per_device_eval_batch_size=16,
    metric_for_best_model='wer',
    evaluation_strategy="steps",
    eval_steps=1000,
    logging_strategy="steps",
    logging_steps=1000,
    save_strategy="steps",
    save_steps=1000,
    num_train_epochs=100,
    fp16=True,
    learning_rate=1e-4,
    warmup_steps=1000,
    save_total_limit=3,
    report_to="tensorboard"
)
```
134
## Evaluation

We benchmark on the test set using WER with words tokenized by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) 2.3.1 and [deepcut](https://github.com/rkcosmos/deepcut), and CER. We also measure performance when spell correction using [TNC](http://www.arts.chula.ac.th/ling/tnc/) ngrams is applied. The evaluation code can be found in `notebooks/wav2vec2_finetuning_tutorial.ipynb`. The benchmark is performed on the `test-unique` split.
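Both metrics reduce to Levenshtein distance, computed over word tokens for WER and over characters for CER. A minimal sketch (not the actual evaluation code, which is in the notebook above):

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences (token lists or strings)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            # insertion, deletion, substitution (no cost if tokens match)
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def wer(ref_tokens, hyp_tokens):
    return levenshtein(ref_tokens, hyp_tokens) / len(ref_tokens)

def cer(ref_text, hyp_text):
    # Strings are sequences of characters, so the same routine applies.
    return levenshtein(ref_text, hyp_text) / len(ref_text)

print(cer("ดีบุก", "ดีบุค"))  # one substituted character out of five → 0.2
```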
138
| | WER PyThaiNLP 2.3.1 | WER deepcut | CER |
|--------------------------------|---------------------|----------------|----------------|
| [Kaldi from scratch](https://github.com/vistec-AI/commonvoice-th) | 23.04 | | 7.57 |
| Ours without spell correction | 13.634024 | **8.152052** | **2.813019** |
| Ours with spell correction | 17.996397 | 14.167975 | 5.225761 |
| Google Web Speech API※ | 13.711234 | 10.860058 | 7.357340 |
| Microsoft Bing Speech API※ | **12.578819** | 9.620991 | 5.016620 |
| Amazon Transcribe※ | 21.86334 | 14.487553 | 7.077562 |
| NECTEC AI for Thai Partii API※ | 20.105887 | 15.515631 | 9.551027 |

※ APIs are not finetuned with Common Voice 7.0 data
150
## LICENSE

[cc-by-sa 4.0](https://github.com/vistec-AI/wav2vec2-large-xlsr-53-th/blob/main/LICENSE)
154
## Acknowledgements
* model training and validation notebooks/scripts: [@cstorm125](https://github.com/cstorm125/)
* dataset cleaning scripts: [@tann9949](https://github.com/tann9949)
* dataset splits: [@ekapolc](https://github.com/ekapolc/) and [@14mss](https://github.com/14mss)
* running the training: [@mrpeerat](https://github.com/mrpeerat)
* spell correction: [@wannaphong](https://github.com/wannaphong)