---
language: vi
datasets:
- vlsp
- vivos
tags:
- audio
- automatic-speech-recognition
license: cc-by-nc-4.0
widget:
- example_title: VLSP ASR 2020 test T1
  src: https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h/raw/main/audio-test/t1_0001-00010.wav
- example_title: VLSP ASR 2020 test T1
  src: https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h/raw/main/audio-test/t1_utt000000042.wav
- example_title: VLSP ASR 2020 test T2
  src: https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h/raw/main/audio-test/t2_0000006682.wav
model-index:
- name: Vietnamese end-to-end speech recognition using wav2vec 2.0 by VietAI
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice vi
      type: common_voice
      args: vi
    metrics:
    - name: Test WER
      type: wer
      value: 11.52
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: VIVOS
      type: vivos
      args: vi
    metrics:
    - name: Test WER
      type: wer
      value: 6.15
---

# Vietnamese end-to-end speech recognition using wav2vec 2.0

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/vietnamese-end-to-end-speech-recognition/speech-recognition-on-common-voice-vi)](https://paperswithcode.com/sota/speech-recognition-on-common-voice-vi?p=vietnamese-end-to-end-speech-recognition)

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/vietnamese-end-to-end-speech-recognition/speech-recognition-on-vivos)](https://paperswithcode.com/sota/speech-recognition-on-vivos?p=vietnamese-end-to-end-speech-recognition)

[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)

### Model description

[Our models](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) are pre-trained on 13k hours of unlabeled Vietnamese YouTube audio and fine-tuned on 250 hours of the labeled [VLSP ASR dataset](https://vlsp.org.vn/vlsp2020/eval/asr), all 16kHz sampled speech audio.

We use the [wav2vec2 architecture](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) for the pre-trained model. Quoting the wav2vec2 paper:

> We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler.

In the fine-tuning phase, wav2vec2 is trained using Connectionist Temporal Classification (CTC), an algorithm used to train neural networks for sequence-to-sequence problems, mainly automatic speech recognition and handwriting recognition.
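
To build intuition for what CTC decoding does, here is a toy sketch of its collapse rule (the symbols and the `_` blank token are assumptions for illustration, not the model's actual vocabulary): per-frame predictions are merged where consecutive labels repeat, then blanks are dropped.

```python
# Toy sketch of the CTC collapse rule; "_" as the blank token is an
# assumption for illustration only.
from itertools import groupby

BLANK = "_"

def ctc_collapse(frame_labels):
    # 1) merge consecutive duplicate labels, 2) drop blank tokens
    merged = [label for label, _ in groupby(frame_labels)]
    return "".join(label for label in merged if label != BLANK)

print(ctc_collapse(list("xxi_nn__ chha_o")))  # -> "xin chao"
```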

| Model | #params | Pre-training data | Fine-tuning data |
|---|---|---|---|
| [base](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) | 95M | 13k hours | 250 hours |

A full ASR system requires two components: an acoustic model and a language model. Here, the CTC-fine-tuned wav2vec model serves as the acoustic model. For the language model, we provide a [4-gram model](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h/blob/main/vi_lm_4grams.bin.zip) trained on 2GB of spoken text; a sketch of decoding with this LM appears after the usage example below.

For details of the training and fine-tuning process, see the [fairseq GitHub repository](https://github.com/pytorch/fairseq/tree/master/examples/wav2vec) and the [Hugging Face blog](https://huggingface.co/blog/fine-tune-wav2vec2-english).

### Benchmark WER results

| | [VIVOS](https://ailab.hcmus.edu.vn/vivos) | [COMMON VOICE VI](https://paperswithcode.com/dataset/common-voice) | [VLSP-T1](https://vlsp.org.vn/vlsp2020/eval/asr) | [VLSP-T2](https://vlsp.org.vn/vlsp2020/eval/asr) |
|---|---|---|---|---|
| without LM | 10.77 | 18.34 | 13.33 | 51.45 |
| with 4-gram LM | 6.15 | 11.52 | 9.11 | 40.81 |

### Example usage

When using the model, make sure that your speech input is sampled at 16kHz and shorter than 10 seconds; a resampling sketch follows the example below. Follow the Colab link below to use a combination of CTC-wav2vec and the 4-gram LM.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1pVBY46gSoWer2vDf0XmZ6uNV3d8lrMxx?usp=sharing)

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
import torch

# load pre-trained model and processor (feature extractor + tokenizer)
processor = Wav2Vec2Processor.from_pretrained("nguyenvulebinh/wav2vec2-base-vietnamese-250h")
model = Wav2Vec2ForCTC.from_pretrained("nguyenvulebinh/wav2vec2-base-vietnamese-250h")

# define function to read in a sound file
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

# read one 16kHz test sound file
ds = map_to_array({
    "file": 'audio-test/t1_0001-00010.wav'
})

# extract input features
input_values = processor(ds["speech"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values  # Batch size 1

# retrieve logits
with torch.no_grad():
    logits = model(input_values).logits

# take argmax and decode (greedy CTC decoding, no language model)
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```
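
If your audio is not already at 16kHz, resample it before feature extraction. A minimal sketch, assuming torchaudio is installed (resampling is not part of the original example, and `my_audio.wav` is a hypothetical input file):

```python
import torchaudio

# load audio at its native sampling rate, then resample to the 16kHz the model expects
waveform, sample_rate = torchaudio.load("my_audio.wav")  # hypothetical input file
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)
speech = waveform[0].numpy()  # first channel as a 1-D numpy array
```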
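
To combine the acoustic model with the 4-gram LM mentioned above, one option is beam-search decoding with the third-party `pyctcdecode` library. This is only a sketch, not the decoding code from the Colab; it assumes `vi_lm_4grams.bin` has been downloaded and unzipped into the working directory, and it reuses `processor` and `logits` from the example above:

```python
from pyctcdecode import build_ctcdecoder

# vocabulary in token-id order, taken from the processor's tokenizer
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda item: item[1])]

# build a CTC beam-search decoder backed by the KenLM 4-gram model
decoder = build_ctcdecoder(labels, kenlm_model_path="vi_lm_4grams.bin")

# decode the (time, vocab_size) logits from the example above
transcription_with_lm = decoder.decode(logits[0].numpy())
print(transcription_with_lm)
```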

### Model Parameters License

The ASR model parameters are made available for non-commercial use only, under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find details at: https://creativecommons.org/licenses/by-nc/4.0/legalcode

### Citation

[![CITE](https://zenodo.org/badge/DOI/10.5281/zenodo.5356039.svg)](https://github.com/vietai/ASR)

```text
@misc{Thai_Binh_Nguyen_wav2vec2_vi_2021,
  author = {Thai Binh Nguyen},
  doi = {10.5281/zenodo.5356039},
  month = {09},
  title = {{Vietnamese end-to-end speech recognition using wav2vec 2.0}},
  url = {https://github.com/vietai/ASR},
  year = {2021}
}
```

**Please cite** our repo when it is used to help produce published results or is incorporated into other software.

# Contact

nguyenvulebinh@gmail.com / binh@vietai.org

[![Follow](https://img.shields.io/twitter/follow/nguyenvulebinh?style=social)](https://twitter.com/intent/follow?screen_name=nguyenvulebinh)