# MAC: A Unified Framework Boosting Low-Resource Automatic Speech Recognition

Anonymous authors

Paper under double-blind review

## Abstract
We propose a unified framework for low-resource automatic speech recognition tasks named meta-audio concatenation (MAC). It is easy to implement and can be carried out in extremely low-resource environments. Mathematically, we give a clear description of the MAC framework from the perspective of Bayesian sampling. We propose a broad notion of meta-audio sets for the concatenative synthesis text-to-speech system to meet the modeling demands of different languages and different scenarios when using the system. With a proper meta-audio set, one can integrate language pronunciation rules in a convenient way. Besides, it can also help reduce the difficulty of forced alignment, improve the diversity of synthesized audios, and solve the "out of vocabulary" (OOV) issue in synthesis. Our experiments demonstrate the great effectiveness of MAC on low-resource ASR tasks. On Cantonese, Taiwanese, and Japanese ASR tasks, the MAC method can reduce the character error rate (CER) by more than 15% and achieve comparable performance to the fine-tuned wav2vec2 model. In particular, it is worth mentioning that we achieve a **10.9%** CER on the Common Voice Cantonese ASR task, leading to about 30% relative improvement compared to wav2vec2 (with fine-tuning), which is a new SOTA.
## 1 Introduction

Automatic speech recognition (ASR) is a traditional task with extensive use in applications. Before the popularity of deep learning methods, HMM-GMM models (Rabiner, 1989), possessing an elegant mathematical form, were widely adopted for ASR tasks with satisfactory performance in relatively simple speech recognition tasks. The most famous HMM-GMM speech recognition toolkit is Kaldi (Povey et al., 2011). With the rise of deep neural networks (NN), a large number of end-to-end (NN-based) speech recognition models have emerged, e.g., Speech-transformer (Dong et al., 2018), Conformer (Gulati et al., 2020), LAS (Chan et al., 2015), and corresponding toolkits such as Espnet (Watanabe et al., 2018) and Wenet (Yao et al., 2021). Furthermore, there are also many pretrained large models, such as vq-wav2vec (Baevski et al., 2019), wav2vec (Schneider et al., 2019) and wav2vec2 (Baevski et al., 2020; Conneau et al., 2020). However, whether for the HMM-GMM model (Rabiner, 1989; Rodríguez et al., 1997) or advanced end-to-end models such as Speech-transformer (Dong et al., 2018), learning a practical model for speech recognition often requires a large amount of data (hundreds or even tens of thousands of hours). In many scenarios such as dialects and minority languages, it is often difficult and expensive to get sufficient audio data for training. A straightforward solution is to use text-to-speech (TTS) methods for data augmentation. There has been a lot of work on TTS data augmentation methods for ASR tasks, such as Laptev et al. (2020); Rossenbach et al. (2020); Sun et al. (2020). In this work, we propose a new framework called MAC (meta-audio concatenation) that can enhance low-resource ASR tasks. The MAC framework is built upon a clear mathematical demonstration from a Bayesian sampling perspective. It uses a novel concatenative synthesis TTS system that incorporates the concept of meta-audios, which are fundamental modeling units of language pronunciations. By leveraging this approach, MAC can boost the performance of low-resource ASR tasks. It is worth mentioning that compared with former TTS methods for ASR tasks, our MAC framework has the following advantages:

- MAC leverages a novel concatenative synthesis text-to-speech system to boost ASR tasks, which can integrate language pronunciation rules as prior knowledge. Furthermore, the process of generating audios shows strong interpretability and controllability and is easy to adjust;
- The proposed meta-audio set is a broad notion for the concatenative synthesis text-to-speech system, and the dimensions of meta-audio sets can be flexibly determined according to prior knowledge, model complexity budgets, audio data size, etc. With a proper meta-audio set, MAC offers benefits such as reduced difficulty in forced alignment, integration of different kinds of prior knowledge, and flexible application to different scenarios with increased diversity in synthesized audios;
- There is no need to train additional TTS neural networks, and the generation process is just simple splicing, which is easy to implement and saves computation resources;
- Most importantly, the MAC framework can be carried out in extremely low-resource environments (e.g., less than 10 hours of training data) without the help of additional labeled data. MAC also has promising potential to model any low-resource language as long as there is prior knowledge of the target language (e.g., pronunciation rules).
We perform extensive experiments to demonstrate the great effectiveness of MAC on low-resource ASR tasks. For Cantonese, Taiwanese and Japanese ASR tasks, MAC can reduce the CER by more than 15% (Cantonese: from 32.5 to 12.7; Taiwanese: from 51.3 to 22.0; Japanese: from 45.3 to 25.0); see Table 3 for more details. Furthermore, MAC outperforms the fine-tuned wav2vec2 on the Cantonese Common Voice dataset and obtains quite competitive results on the Taiwanese and Japanese Common Voice datasets. Besides, with the attention rescoring decoding mode, we achieve a **10.9%** CER on the Common Voice Cantonese ASR task, resulting in a significant relative improvement of about 30% compared to fine-tuning the wav2vec2 model. Notably, these remarkable improvements in accuracy are achieved even without careful tuning of hyper-parameters.
Commonly, semi-supervised learning and transfer learning are widely utilized in low-resource speech recognition scenarios. However, both approaches can face certain difficulties in extremely low-resource scenarios. We briefly discuss these points below.

## 1.1 Semi-Supervised Learning
For semi-supervised learning, pseudo-label algorithms and their variants (e.g., the iterative pseudo-label algorithm (Xu et al., 2020)) are widely adopted; their basic idea is to use pseudo labels to help train the model. Table 1, reported in Higuchi et al. (2022), demonstrates the effectiveness of iterative pseudo-label semi-supervised algorithms. The experiments are conducted on the LibriSpeech dataset (Panayotov et al., 2015) and the TEDLIUM3 dataset (Hernandez et al., 2018). Here, LL-10h, LS-100h, LS-360h and LS-860h represent different splits of the LibriSpeech dataset, and the results are evaluated on the dev-other and test-other splits of the dataset.
| Resource | Dev | Test |
|------------------|-------|--------|
| LS-100h | 22.5 | 23.3 |
| LS-100h/LS-360h | 15.9 | 15.8 |
| LS-100h/LS-860h | 13.9 | 14.2 |
| LS-100h/TEDLIUM3 | 18.9 | 18.5 |
| LL-10h | 50.6 | 51.3 |
| LL-10h/LS-360h | 35.4 | 36.1 |
| LL-10h/LS-860h | 33.5 | 34.4 |
Table 1: Results of iterative pseudo-label-based semi-supervised learning for ASR tasks (Higuchi et al., 2022). The experiments are performed on the LibriSpeech and TEDLIUM3 datasets with CER as the evaluation criterion.

Although the performance can be significantly improved through the use of semi-supervised learning, we observe that the quality of the initial models, trained on the LL-10h and LS-100h data, has a remarkable impact on the final performance. As evident from the data in the table, a higher-quality initial model, characterized by the lower error rates associated with the LS-100h training resource compared to the LL-10h resource, leads to improved performance in the semi-supervised results, even when the same amount of unlabeled data is used. Furthermore, it is worth noting that the benefit of adding additional unlabeled data diminishes as the amount of unlabeled data increases. For example, the improvement obtained by adding 860 hours of unlabeled data is only approximately 2% compared to adding 360 hours of unlabeled data. This further emphasizes the significance of the quality of the initial model in influencing the final results. This observation suggests that the performance of the initial model sets an upper limit for the effectiveness of semi-supervised learning. When starting with extremely low resources, such as only 10 hours of data, a poor initial model is obtained, making it challenging for semi-supervised learning to achieve satisfactory results.
There are also some theoretical results that explain these phenomena (see, e.g., Wei et al. (2020) and Min & Tai (2022)). In fact, in real low-resource scenarios, it can be rather difficult to obtain a suitable initial model (or pseudo-label generator) due to the limited amount of labeled data. These difficulties seriously affect the performance of pseudo-label-based semi-supervised learning when applied to extremely low-resource ASR tasks.
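As a concrete reference point for the discussion above, the sketch below illustrates the generic iterative pseudo-label loop in Python. It is a schematic illustration only, not the exact recipe of Xu et al. (2020) or Higuchi et al. (2022); the `train_fn`, `transcribe_fn`, and confidence-threshold arguments are assumptions introduced for this sketch.

```python
# A schematic sketch of iterative pseudo-labeling; `train_fn` and
# `transcribe_fn` stand in for any ASR training / decoding routine.
from typing import Callable, List, Tuple

Labeled = List[Tuple[str, str]]   # (audio_path, transcript) pairs
Unlabeled = List[str]             # audio_path only


def iterative_pseudo_labeling(
    labeled: Labeled,
    unlabeled: Unlabeled,
    train_fn: Callable[[Labeled], object],                      # returns a trained model
    transcribe_fn: Callable[[object, str], Tuple[str, float]],  # (hypothesis, confidence)
    rounds: int = 3,
    confidence_threshold: float = 0.9,
) -> object:
    """Train a seed model on labeled data, then repeatedly relabel the
    unlabeled pool with the current model and retrain on the union."""
    model = train_fn(labeled)   # the quality of this seed model caps the final result
    for _ in range(rounds):
        pseudo: Labeled = []
        for audio_path in unlabeled:
            hypothesis, confidence = transcribe_fn(model, audio_path)
            if confidence >= confidence_threshold:   # keep only confident pseudo-labels
                pseudo.append((audio_path, hypothesis))
        model = train_fn(labeled + pseudo)            # retrain on real + pseudo-labeled data
    return model
```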
## 1.2 Transfer Learning

For pretrained large models with fine-tuning, wav2vec2 (Baevski et al., 2020; Conneau et al., 2020) is one of the leading representative models. Wav2vec2 has shown strong transfer learning capabilities on ASR tasks. Using models like wav2vec2, the CER can be significantly reduced (Yi et al., 2021). Although transfer learning is generally effective, its performance can be significantly constrained if there is a large gap between the target and pretraining speech domains. For example, the wav2vec2 model (Baevski et al., 2020) is pretrained on English audio data in the English speech domain, which gives a 4.8% CER when fine-tuned with 10 minutes of labeled English audio data. However, it only achieves a 28.32% CER even when using 27k training utterances in the Japanese domain (Yi et al., 2021).
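For illustration, the following minimal sketch shows how a publicly released fine-tuned wav2vec2 checkpoint (the Cantonese model later used for comparison, ctl/wav2vec2-large-xlsr-cantonese) can be run for greedy CTC transcription with the Hugging Face `transformers` API; the audio path is a placeholder and no language model or rescoring is applied.

```python
# Minimal wav2vec2 inference sketch (greedy CTC decoding only).
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "ctl/wav2vec2-large-xlsr-cantonese"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

waveform, sample_rate = torchaudio.load("example.wav")          # placeholder path
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits                  # (1, frames, vocab)
prediction = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(prediction)
```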
## 2 Related Work

There have been many attempts to use TTS for data augmentation to benefit ASR tasks, e.g., Laptev et al. (2020); Rossenbach et al. (2020); Sun et al. (2020); Li et al. (2018); Ueno et al. (2021); Rosenberg et al. (2019); Tjandra et al. (2017). Among them, Ueno et al. (2021) focuses on the representation aspect. Results in Rosenberg et al. (2019) indicate the effectiveness of TTS data enhancement, although it may not be as good as training on real speech data. Tjandra et al. (2017) takes advantage of the close connection between TTS and ASR models.
The idea of leveraging a concatenative synthesis text-to-speech system to boost ASR is also explored in Du et al. (2021); Zhao et al. (2021). However, in the audio splicing data augmentation method developed in Du et al. (2021), they only replace the English audio part of code-switching audio, which is a simple and preliminary splicing method. Therefore, the diversity of the spliced audio is limited, and it is difficult to introduce audio containing OOV (out-of-vocabulary) texts. Zhao et al. (2021) focuses on adaptation to new domains and is outside the scope of low-resource tasks.

It is also worth mentioning that a recent work, Min et al. (2022), shows that competitive performance on Mandarin ASR tasks can be achieved with only 10 hours of Mandarin audio data using a novel concatenative synthesis text-to-speech system. The process mainly involves training models on one Mandarin audio dataset, mapping characters to pinyin using a character-pinyin dictionary, and synthesizing audio by concatenating pinyin-audio pairs. This method has many properties that are well-suited for low-resource ASR tasks, as it does not require additional labeled audio data; hence it is efficient, interpretable, and convenient for human intervention. For adaptation, a simple energy normalization is provided in Min et al. (2022) instead of other, more complex energy normalization methods such as Lostanlen et al. (2018).
## 3 Method

Both semi-supervised learning and transfer learning methods face challenges in low-resource speech recognition tasks. Besides, training a reliable speech synthesis system in such settings is essentially quite difficult.

As a solution, our approach uses speech concatenation synthesis as a TTS data augmentation technique. Specifically, for each text to be synthesized, we find the corresponding audio for each word in the text, then normalize their energy and perform concatenation. The key step here is to find the audio for each word in the texts, which is accomplished via forced alignment on labeled data. To overcome any possible out-of-vocabulary (OOV) issues, we introduce the concept of meta-audio.
Our approach has two benefits. First, it provides a manner to mix labeled audios in the time domain and fully utilizes labeled audios to help the model learn more robust acoustic features. Second, it easily allows the model to learn other textual information, helping the model make accurate predictions. To describe the overall framework of MAC, we use a Bayesian sampling framework. In Section 3.1, we briefly introduce the symbols and notations used, as well as the general data configuration for low-resource speech recognition. Sections 3.2 and 3.3 introduce the notion of meta-audios and demonstrate its bridging role in the speech concatenation process. Sections 3.4 and 3.5 describe the position of forced alignment in the Bayesian sampling process from a mathematical perspective. Section 3.6 presents the benefits of energy normalization in improving the quality of synthesized audios. Finally, Section 3.7 summarizes the overall procedure and gives an algorithmic characterization of the whole synthesis process.

## 3.1 General Audio Datasets
The mathematical formulation is as follows. Denote the audio wave space by $\mathcal{X}$ and the transcription text space by $\mathcal{Y}$, i.e., $\mathcal{X} = \{x : \text{all the audio waves}\}$, $\mathcal{Y} = \{y : \text{all the transcription texts}\}$. In general, for ASR tasks, the labeled audio dataset consists of audio-transcription pairs $\{(x_i, y_i)\}_{i=1}^{N}$ sampled from a certain underlying distribution $P$, with $\{x_i\}_{i=1}^{N} \sim P_x$ and $\{y_i\}_{i=1}^{N} \sim P_y$, where $P_x$ and $P_y$ are the marginal distributions of $P$.
Obtaining a practical speech recognition model often requires hundreds or even thousands of hours of audio-transcription pairs for training. Unfortunately, getting audio-transcription pairs is usually expensive. In fact, in many scenarios such as dialects, one can only access around ten hours of audio-transcription pairs.

However, the audio-only data $x \sim P_x$ and the text-only data $y \sim P_y$ are respectively much easier to access. Formally, we have a paired dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, an audio-only dataset $\mathcal{D}_{\text{audio}} = \{x_i\}_{i=1}^{N_1}$ and a text-only dataset $\mathcal{D}_{\text{text}} = \{y_i\}_{i=1}^{N_2}$ with $N_2 \gg N_1 \gg N$. This is the typical setting of low-resource ASR tasks. The goal is to sample paired audio-transcription data $(x, y)$ from the underlying distribution $P$.
Basically, we have

$$P(x, y) = P_y(y)\,P(x \mid y), \tag{1}$$

where $P_y(y)$ corresponds to the distribution of transcriptions, and $P(x \mid y)$ denotes the distribution of audios conditioned on a certain transcription $y$. Therefore, the sampling of a new data point (audio-transcription pair) $(x, y)$ can be divided into two stages. First, the transcription text $y$ is sampled from $P_y(y)$, and then the audio $x$ corresponding to the previous transcription text $y$ is sampled from $P(x \mid y)$. The first stage, i.e., sampling the transcription text $y$, is relatively easy since there is usually sufficient text-only data in $\mathcal{D}_{\text{text}} = \{y_i\}_{i=1}^{N_2}$. We simply count the frequency of each transcription text in the text-only dataset $\mathcal{D}_{\text{text}}$ and use the frequency to estimate the corresponding probability. That is, $P_y(y) \approx \tilde{P}_y(y) = \frac{1}{N_2} \sum_{y_i \in \mathcal{D}_{\text{text}}} \delta(y_i)$, where $\delta(y_i)$ denotes the Dirac function that equals 1 if $y = y_i$ and 0 for any $y \neq y_i$, and recall that $N_2$ is the total number of transcription texts in $\mathcal{D}_{\text{text}}$. Therefore, the key is to analyze and maximize $P(x \mid y)$, which is in fact a TTS task.
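The first sampling stage amounts to drawing transcriptions from the empirical distribution $\tilde{P}_y$. The following toy sketch shows this in Python; the small `D_text` list is a made-up stand-in for the real text-only dataset.

```python
# Estimate P_y by the empirical frequencies of the text-only corpus and sample from it.
import random
from collections import Counter

D_text = ["你好", "早晨", "你好", "多謝", "你好"]   # toy text-only dataset (assumption)

counts = Counter(D_text)                             # frequency of each transcription
texts = list(counts.keys())
weights = [counts[t] / len(D_text) for t in texts]   # \tilde{P}_y(y)

sampled_y = random.choices(texts, weights=weights, k=10)   # Step 5 of Algorithm 1
print(sampled_y)
```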
## 3.2 Meta-Audio Set And Meta-Audio Sequence Space

In order to perform further "decoupling" analysis on the conditional probability distribution $P(x \mid y)$, we first introduce the meta-audio set, represented by $\mathcal{A}$. Here, meta-audios refer to the basic modeling units of specific language pronunciations. For example, there are about 50 phonemes in English. If we want to use these phonemes as modeling units to characterize English pronunciation, these phonemes form a natural meta-audio set of English. In our framework, we have designed the selection of meta-audio sets to be flexible, taking into account the unique characteristics of different languages. For example, in English, a meta-audio set can be created by using phonemes directly, or by fusing multiple phonemes together as a single unit. Similarly, for Mandarin, meta-audio sets can be determined based on pinyin, either in a tone-sensitive or tone-insensitive manner, depending on the requirements of the specific application. For Japanese, kana can be used as the meta-audio set. This flexibility allows us to adapt the meta-audio sets to the specific language being considered, making our approach versatile and adaptable to different linguistic contexts. By considering different options for meta-audio set creation, we can optimize our approach for each language, ensuring accurate and effective results in our audio processing tasks.

![4_image_0.png](4_image_0.png)

![4_image_1.png](4_image_1.png)

![4_image_2.png](4_image_2.png)

![4_image_3.png](4_image_3.png)

Figure 1: Examples of the mapping function $t: \mathcal{Y} \to \mathcal{A}$ in Chinese, English and Japanese (Chinese sentence → meta-audio sequence (pinyin); English sentence → meta-audio sequence (phoneme); Japanese sentence → meta-audio sequence (kana)). The function $t$ maps a transcription text to its corresponding meta-audio sequence, which reflects language-specific pronunciation rules. Hence, $t$ can act as a fusion of prior knowledge of pronunciation rules.
**Mapping function $t: \mathcal{Y} \to \mathcal{A}$.** To reflect language-specific pronunciation rules, we also need a function $t: \mathcal{Y} \to \mathcal{A}$ that maps a transcription text to its corresponding meta-audio sequence. The construction of $t$ requires prior knowledge of pronunciation rules. Obviously, different languages may have different mappings, and even different meta-audio sets in the same language may lead to different mappings. Figure 1 shows examples of $t$ for Chinese, English and Japanese.
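As a toy illustration of such a mapping, the sketch below implements $t$ for a character-based language with a tiny character-to-pronunciation dictionary; the dictionary entries are illustrative placeholders, and a real system would rely on a full pronunciation lexicon for the target language.

```python
# A toy mapping function t: Y -> A based on a character-to-meta-audio dictionary.
from typing import Dict, List

CHAR_TO_META: Dict[str, str] = {   # illustrative, tiny dictionary (assumption)
    "你": "nei",
    "好": "hou",
    "早": "zou",
    "晨": "san",
}


def t(transcription: str) -> List[str]:
    """Map a transcription text to its meta-audio sequence."""
    # A real implementation would also handle characters missing from the lexicon.
    return [CHAR_TO_META[ch] for ch in transcription if ch in CHAR_TO_META]


print(t("你好"))   # ['nei', 'hou']
```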
## 3.3 Decoupling Analysis

In this section, we perform a fine-grained decoupling analysis on the probability $P(x \mid y)$, i.e., the conditional distribution of audios given transcriptions. On the one hand, when an audio $x \in \mathcal{X}$ is given, we have a conditional distribution $P(y \mid x)$. This is in fact the goal of ASR tasks: predict the corresponding text given the audio by estimating $P(y \mid x)$. On the other hand, when a transcription $y \in \mathcal{Y}$ is given, we are supposed to model the desired conditional distribution $P(x \mid y)$. One can decompose $P(x \mid y)$ via the meta-audio sequence space $\mathcal{A}$ and the mapping function $t$ as follows:

$$\begin{split}P(x \mid y) &= \sum_{a\in\mathcal{A}} P(x, a \mid y)\\ &= \sum_{a\in\mathcal{A}} P(x \mid a, y)\,P(a \mid y)\\ &= P(x \mid a = \mathbf{t}(y), y).\end{split}\tag{2}$$

![5_image_0.png](5_image_0.png)

Figure 2: An illustration of the decoupling process. The goal of ASR tasks is to estimate $P(y \mid x)$ and the goal of TTS tasks is to estimate $P(x \mid y)$. In order to simplify the modeling of $P(x \mid y)$, we introduce the mapping function $\mathbf{t}$ to transform $P(x \mid y)$ into $P(x \mid a)$. The dimension of $a$ is generally much smaller than that of $y$. Here, we applied the fact that $P(a \mid y)$ is a degenerate distribution with probability 1 at $a = \mathbf{t}(y)$. In the figure, each blank denotes a meta-audio (phoneme) of the meta-audio sequence $\mathbf{t}(y) = a \in \mathcal{A}$.

Furthermore, the meta-audio sequence contains all the pronunciation information of the transcription text, hence

$$P(x \mid a = \mathbf{t}(y), y) = P(x \mid a = \mathbf{t}(y)), \tag{3}$$

which gives

$$P(x \mid y) = P(x \mid a = \mathbf{t}(y)). \tag{4}$$

Figure 2 illustrates this decoupling process.
## 3.4 Bayesian Inference On $P(x \mid a)$

Based on Section 3.3 and Eq. (4), instead of analyzing $P(x \mid y)$ directly, we turn to study $P(x \mid a)$. This is usually much easier, since $a \in \mathcal{A}$ often has a *much lower dimension* than $y \in \mathcal{Y}$. Here, the dimension refers to the number of element classes per position of $a \in \mathcal{A}$ or $y \in \mathcal{Y}$. For example, in English, the dimension of $a \in \mathcal{A}$ is about 50 (here we naturally select the phonemes as meta-audios for simple interpretation), while the dimension of $y \in \mathcal{Y}$ could be much higher since there is a large number of English words.

Notice that

$$P(x \mid a) \propto P_x(x)\,P(a \mid x), \tag{5}$$

so the goal now becomes maximizing $P_x(x)$ and $P(a \mid x)$ in order to maximize $P(x \mid a)$. Recall that $P_x(x)$ denotes the prior probability of audios; we will discuss it later (in Section 3.6). For $P(a = (a^{(1)}, a^{(2)}, \ldots, a^{(n)}) \mid x)$, we have

$$P(a \mid x) = \sum_{\mathbf{s}} \prod_{i=1}^{n} P\left(a^{(i)} \mid x^{(i)} = \left[x_{s_i}, x_{s_{i+1}}\right)\right). \tag{6}$$
Here, $\mathbf{s} = (s_1, s_2, \ldots, s_{n+1})$ represents the time slice of $x \in \mathcal{X}$. Eq. (6) holds because: 1) the audio wave $x$ can be properly divided (maybe not uniquely) to obtain the clip $x^{(i)} = [x_{s_i}, x_{s_{i+1}})$ corresponding to each $a^{(i)}$; for instance, we can segment an English audio wave of one sentence and get the audio wave segmentation corresponding to the sentence's meta-audio sequence;¹ 2) the audio wave clip $x^{(i)}$ in $x$ is monotone with respect to $a^{(i)}$ in $a$, that is, the timestamps of audio waves and meta-audios must match each other, i.e., $x^{(i)}$ and only $x^{(i)}$ corresponds to $a^{(i)}$; 3) for simplicity, we treat these correspondences independently.

¹ Here, the meta-audio sequence is the phoneme sequence, since we naturally select phonemes as meta-audios for simple interpretation.

Unfortunately, it is still quite expensive to consider all the time slices $\mathbf{s} = (s_1, s_2, \ldots, s_{n+1})$ in Eq. (6). However, for each fixed time slice $\mathbf{s}^0 = (s^0_1, s^0_2, \ldots, s^0_{n+1})$, we can derive a lower bound of $P(a \mid x)$:

$$P(a \mid x) = \sum_{\mathbf{s}} \prod_{i=1}^{n} P\left(a^{(i)} \mid x^{(i)} = \left[x_{s_i}, x_{s_{i+1}}\right)\right) \geq \prod_{i=1}^{n} P\left(a^{(i)} \mid x^{(i)} = \left[x_{s^0_i}, x_{s^0_{i+1}}\right)\right). \tag{7}$$
## 3.5 Optimization Of $P(a \mid x)$

According to Eq. (7) in Section 3.4, we can approximately maximize $P(a = (a^{(1)}, a^{(2)}, \ldots, a^{(n)}) \mid x)$ by maximizing a lower bound determined by a fixed partition. The right-hand side of Eq. (7) can be maximized by performing forced alignment (Kim et al., 2021; López & Luque, 2022). Specifically, we first map the transcription text $y \in \mathcal{Y}$ in the labeled audio dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$ into $a \in \mathcal{A}$ to get a corresponding dataset $\{(x_i, a_i)\}_{i=1}^{N}$. Then, we train the ASR model and perform forced alignment on $\{(x_i, a_i)\}_{i=1}^{N}$ to get the audio wave clip corresponding to each meta-audio element of $a_i$ (with high probability). For further efficiency, we can store the forced alignment results and build a database $\mathcal{B}$, whose construction is illustrated in Figure 3. When we synthesize audios from texts, we first convert the text $y$ to the meta-audio sequence $a$, then query the audio clip corresponding to each meta-audio element. If there is more than one answer in the database, we just randomly select one. The time slice $\mathbf{s}^0$ in Eq. (7) is implicitly considered when we concatenate these audio clips to form a complete audio wave, since concatenating the audio clips corresponding to each meta-audio element in the sequence automatically forms a time slice $\mathbf{s}^0$.
**Remark 1 (Database size)** *For each element $a^{(i)}$, we may get several different corresponding audio clips (with high probability) by performing forced alignment on $\{(x_i, a_i)\}_{i=1}^{N}$. We just store all of them in the database $\mathcal{B}$ to enrich the selections and increase the diversity of synthesized audios.*

![6_image_0.png](6_image_0.png)

Figure 3: The process of building the database $\mathcal{B}$. Note that we can get audio clips from the training set, hence we can train an ASR model and use it for forced alignment on the same training set, which is very helpful in extremely low-resource conditions for obtaining high-quality audio clips.
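A minimal sketch of building and querying the database $\mathcal{B}$ is given below. It assumes that a forced aligner has already produced, for every training utterance, a list of (meta-audio label, start sample, end sample) triples; the function and variable names are illustrative and not tied to a specific toolkit.

```python
# Build the clip database B from forced-alignment results and query it.
import random
from collections import defaultdict
from typing import Dict, List, Tuple

import numpy as np

AlignedUtt = List[Tuple[str, int, int]]   # (meta-audio label, start, end) in samples


def build_database(
    waves: List[np.ndarray], alignments: List[AlignedUtt]
) -> Dict[str, List[np.ndarray]]:
    """Store every aligned clip, so each meta-audio keeps multiple candidates (Remark 1)."""
    database: Dict[str, List[np.ndarray]] = defaultdict(list)
    for wave, alignment in zip(waves, alignments):
        for meta_audio, start, end in alignment:
            database[meta_audio].append(wave[start:end])
    return database


def query(database: Dict[str, List[np.ndarray]], meta_audio: str) -> np.ndarray:
    """Randomly pick one of the stored clips for the given meta-audio."""
    return random.choice(database[meta_audio])
```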
## 3.6 Energy Normalization

In this section, we discuss energy normalization and its benefits. There are many energy normalization methods; here we follow the operation described in Min et al. (2022). Specifically, we average the energy of the sampled audio clips $(x^{(1)}, x^{(2)}, \ldots, x^{(n)})$ corresponding to the meta-audio sequence $a = (a^{(1)}, a^{(2)}, \ldots, a^{(n)}) \in \mathcal{A}$. That is,

$$E = \frac{\sum_{i=1}^{n} \|x^{(i)}\|}{n}, \tag{8}$$

$$\left\{x^{(i)}\right\}_{i=1}^{n} \to \left\{\frac{x^{(i)}}{\|x^{(i)}\|} \cdot E\right\}_{i=1}^{n}. \tag{9}$$

The reason for energy normalization is that the audio wave obtained by combining these audio clips as presented in Section 3.5 may give a high probability $\prod_{i=1}^{n} P\left(a^{(i)} \mid x^{(i)}\right)$ but fail to take the term $P_x(x = (x^{(1)}, x^{(2)}, \ldots, x^{(n)}))$ in Eq. (5) into account. For example, the volume of the generated $(x^{(1)}, x^{(2)}, \ldots, x^{(n)})$ may change rapidly and frequently, leading to a small $P_x(x = (x^{(1)}, x^{(2)}, \ldots, x^{(n)}))$. Mathematically, this can be understood as follows: the support of $P_x$ is likely to be a very small (proper) subset of the whole audio space $\mathcal{X}$. Therefore, simply merging these audio clips may cause the synthesized audio wave $x = (x^{(1)}, x^{(2)}, \ldots, x^{(n)})$ corresponding to $a = (a^{(1)}, a^{(2)}, \ldots, a^{(n)})$ to be severely distorted.
There are two benefits of this operation. First, the normalization mitigates rapid and frequent changes in volume, resulting in a more stable and natural synthesized audio wave $x = (x^{(1)}, x^{(2)}, \ldots, x^{(n)})$ and alleviating the distortion issue described above.

Second, the energy normalization in Eqs. (8) and (9) will not affect $\prod_{i=1}^{n} P\left(a^{(i)} \mid x^{(i)}\right)$ too much, since it only involves a linear scaling of $x^{(i)}$, $i = 1, 2, \ldots, n$. We further have

$$P\left(a^{(i)} \mid x^{(i)}\right) \approx P\left(a^{(i)} \mid \frac{x^{(i)}}{\|x^{(i)}\|} \cdot E\right). \tag{10}$$

In a nutshell, by incorporating energy normalization, we ensure that the generated audio clips are combined in a way that maintains their original characteristics and preserves the overall quality of the synthesized audio wave, thereby improving the quality of the synthesized audio output.
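For completeness, the energy normalization of Eqs. (8) and (9), followed by concatenation, can be sketched in a few lines of NumPy; the small `eps` term is an added safeguard against empty or silent clips and is not part of the original formulation.

```python
# Energy normalization (Eqs. (8)-(9)) and concatenation of audio clips.
from typing import List

import numpy as np


def normalize_and_concatenate(clips: List[np.ndarray], eps: float = 1e-8) -> np.ndarray:
    """Scale each clip x^(i) to the average energy E, then splice the results."""
    energy = float(np.mean([np.linalg.norm(clip) for clip in clips]))              # E in Eq. (8)
    normalized = [clip / (np.linalg.norm(clip) + eps) * energy for clip in clips]  # Eq. (9)
    return np.concatenate(normalized)


# toy usage with random "clips" of very different volumes
clips = [np.random.randn(1600) * scale for scale in (0.1, 1.0, 5.0)]
audio = normalize_and_concatenate(clips)
print(audio.shape)
```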
Certainly, other (more complex) methods are also widely used in concatenative synthesis-based text-to-speech (TTS) systems (Tabet & Boughazi, 2011; Khan & Chitode, 2016). In the following experiments, the energy normalization described in Eqs. (8) and (9) is easy to implement and works well. More importantly, our ultimate goal is to handle ASR tasks, hence it is unnecessary to pay too much attention to the fine-grained quality of the audios generated by TTS, which may lead to over-complex models. However, improving the quality of synthesized audios may give better results, and we will further discuss the applicability of other (more complex) methods in future work.

## 3.7 Summary Of The MAC Framework

Overall, the MAC framework is a method for synthesizing audios. The following algorithm outlines all the steps involved:

## Algorithm 1 MAC
**Input:** Labeled audio dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, text-only dataset $\mathcal{D}_{\text{text}} = \{y_i\}_{i=1}^{N_2}$

1. Determine a proper meta-audio set $\mathcal{A}$ according to the language pronunciation rules.
2. Create the mapping function $t: \mathcal{Y} \to \mathcal{A}$ according to the pronunciation rules.
3. Map $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$ into $\{(x_i, a_i)\}_{i=1}^{N}$ using $t$.
4. Perform forced alignment on $\{(x_i, a_i)\}_{i=1}^{N}$, then build the database $\mathcal{B}$.
5. Sample a text $y \in \mathcal{Y}$ from $\tilde{P}_y(y) = \frac{1}{N_2} \sum_{y_i \in \mathcal{D}_{\text{text}}} \delta(y_i)$ and get $a = (a^{(1)}, a^{(2)}, \ldots, a^{(n)}) = t(y)$.
6. Randomly select one audio clip $x^{(i)}$ in $\mathcal{B}$ for each $a^{(i)}$ in the meta-audio sequence $a = (a^{(1)}, a^{(2)}, \ldots, a^{(n)})$.
7. Perform energy normalization (Eqs. (8) and (9)) and concatenate the audio clips.
8. Obtain the synthesized audio-transcription pair $(x, y)$, where $x$ is from Step 7 and $y$ is from Step 5.

**Output:** The synthesized audio-transcription pairs.
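Putting Steps 5-8 together, a sketch of the synthesis loop is shown below. It assumes the mapping function and the clip database from the earlier sketches (`t_fn` and `database`), and `D_text` is the text-only dataset; sampling a text uniformly from the repetition-preserving list is equivalent to sampling from $\tilde{P}_y$.

```python
# A sketch of Steps 5-8 of Algorithm 1 (one synthesized pair per call).
import random
from typing import Callable, Dict, List, Tuple

import numpy as np


def synthesize_pair(
    D_text: List[str],
    t_fn: Callable[[str], List[str]],
    database: Dict[str, List[np.ndarray]],
) -> Tuple[np.ndarray, str]:
    y = random.choice(D_text)                                     # Step 5: sample y ~ P~_y
    meta_sequence = t_fn(y)                                       # a = t(y)
    clips = [random.choice(database[a]) for a in meta_sequence]   # Step 6: pick clips from B
    energy = float(np.mean([np.linalg.norm(c) for c in clips]))   # Step 7: energy normalization
    clips = [c / (np.linalg.norm(c) + 1e-8) * energy for c in clips]
    return np.concatenate(clips), y                               # Step 8: synthesized (x, y)
```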
## 4 Experiments

We verify the effectiveness of MAC on three real low-resource ASR tasks, Cantonese, Taiwanese, and Japanese, using the corresponding datasets in the Common Voice datasets.² The experiments show that MAC outperforms the large-scale wav2vec2 pretrained model with fine-tuning on the Cantonese task and achieves very competitive results on the other tasks.

## 4.1 Datasets And Pre-Processing

We use the training split of the corresponding dataset in the Common Voice datasets (version 11.0) as the original labeled data; Common Voice is a publicly available multilingual audio dataset contributed by volunteers around the world. In addition, we generate synthetic data using the proposed MAC. The detailed numbers of original labeled utterances in the training split and of synthetic utterances generated by MAC are reported in Table 2. We use the test split of the corresponding datasets to evaluate the performance.
| Dataset | Original | Synthesized by MAC | All |
|-----------|------------|----------------------|--------|
| Cantonese | 8,423 | 63,999 | 72,422 |
| Taiwanese | 6,568 | 69,315 | 75,883 |
| Japanese | 6,505 | 33,099 | 39,604 |

Table 2: Number of original labeled audio utterances in the training split and of synthetic utterances generated by MAC for the Cantonese, Taiwanese and Japanese ASR tasks.
## 4.2 Hybrid CTC/Attention Architecture

To ensure a comprehensive speech recognition solution, we utilize a cutting-edge hybrid CTC/attention model built with the advanced Wenet toolkit (Yao et al., 2021). The encoder architecture is based on the Conformer, which is known for its high performance in speech modeling. Meanwhile, the decoder uses the Transformer decoder, another well-established architecture in the field of speech recognition.

The encoder consists of 6 Conformer blocks, each comprising a sequence of multi-head self-attention, CNN and feed-forward modules. The attention layer in the encoder has an output dimension of 256 and is equipped with 4 attention heads, the kernel size of the CNN module is 15, and the feed-forward layer contains 512 units. The decoder consists of 3 blocks of multi-head attention, with 4 attention heads and 512 units in the feed-forward layer.

We select a dropout rate of 0.3, which helps prevent overfitting and improves generalization. We also employ Swish (Ramachandran et al., 2017) as the activation function.
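For reference, the architecture hyper-parameters above can be collected in a Wenet-style configuration; the snippet below is an illustrative Python rendering of such a config, with field names following the usual Wenet `train.yaml` layout rather than the authors' actual configuration file.

```python
# Illustrative encoder/decoder settings matching the description in the text.
model_config = {
    "encoder": "conformer",
    "encoder_conf": {
        "num_blocks": 6,            # 6 Conformer blocks
        "attention_heads": 4,
        "output_size": 256,         # attention output dimension
        "linear_units": 512,        # feed-forward units
        "cnn_module_kernel": 15,    # kernel size of the CNN module
        "dropout_rate": 0.3,
        "activation_type": "swish",
    },
    "decoder": "transformer",
    "decoder_conf": {
        "num_blocks": 3,            # 3 decoder blocks
        "attention_heads": 4,
        "linear_units": 512,
        "dropout_rate": 0.3,
    },
}
```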
## 4.3 Setup

We conduct the experiments on 2× RTX 3090 GPUs (24 GB) and 4× P100 GPUs (16 GB). We set the maximum number of epochs to 300, and the model is trained using the Adam optimizer with a learning rate of 0.002. We use a warm-up learning rate scheduler, and the gradient clipping threshold is set to 5. Besides, data augmentation techniques such as speed perturbation and spectral augmentation are applied. We also perform label smoothing with a weight of 0.1 and use a hybrid CTC/attention loss with a weight of 0.3. For pre-processing, we set the maximum input length to 40960 ms and the minimum length to 0. The maximum and minimum token lengths are set to 200 and 1, respectively. We use a re-sample rate of 16,000 Hz, and 80 mel bins with a frame shift of 10 ms and a frame length of 25 ms for feature extraction. Besides, we randomly shuffle and sort the data during training.
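The feature extraction step can be sketched with torchaudio's Kaldi-compatible front end, which is what Wenet uses internally; `example.wav` is a placeholder path, and the 16-bit scaling is the common convention when feeding normalized waveforms to this front end.

```python
# 80-dimensional filterbank features: 25 ms frames, 10 ms shift, 16 kHz audio.
import torchaudio
import torchaudio.compliance.kaldi as kaldi

waveform, sample_rate = torchaudio.load("example.wav")   # placeholder path
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

fbank = kaldi.fbank(
    waveform * (1 << 15),      # scale normalized floats to the 16-bit range
    num_mel_bins=80,
    frame_length=25.0,         # ms
    frame_shift=10.0,          # ms
    sample_frequency=16_000,
)
print(fbank.shape)             # (num_frames, 80)
```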
² We use the zh-HK dataset in the Common Voice datasets for the Cantonese ASR task.
| Task | Model | CER |
|---------------|----------------------------------|-------|
| Cantonese ASR | Hybrid CTC/attention model | 32.5 |
| | Hybrid CTC/attention model + MAC | 12.7 |
| | wav2vec2 + fine-tuning | 15.4 |
| Taiwanese ASR | Hybrid CTC/attention model | 51.3 |
| | Hybrid CTC/attention model + MAC | 22.0 |
| | wav2vec2 + fine-tuning | 18.4 |
| Japanese ASR | Hybrid CTC/attention model | 45.3 |
| | Hybrid CTC/attention model + MAC | 25.0 |
| | wav2vec2 + fine-tuning | 24.9* |
Table 3: CERs for the Cantonese, Taiwanese and Japanese ASR tasks. We use the advanced hybrid CTC/attention model and test with CTC prefix beam search. It is shown that MAC can boost performance significantly, and also achieve competitive results compared to the pretrained wav2vec2 model with fine-tuning.

For the Cantonese, Taiwanese, and Japanese ASR tasks, the natural choice of the meta-audio set is based on their respective pronunciation rules. Here, we use Cantonese pinyin for Cantonese, pinyin for Taiwanese, and kana for Japanese. Taking advantage of the flexibility of the meta-audio notion, some adjustments can be made to each language's meta-audio set. For instance, we do not distinguish tones for Cantonese pinyin, whereas for the pinyin used to construct the Taiwanese meta-audio set, tones are distinguished. Since for a transcription text $y = (y^1, y^2, \ldots, y^m)$ in Cantonese or Taiwanese we generally have the approximation

$$t(y^1, y^2, \ldots, y^m) \approx (t(y^1), t(y^2), \ldots, t(y^m)), \tag{11}$$

the order of Step 3 and Step 4 is reversed to avoid additional training of the model for forced alignment³ and the overall process is simplified. For Japanese, Eq. (11) no longer holds, hence the order of Step 3 and Step 4 is not reversed.

For each language, the mapping function $t$ is used to map the transcription texts $y \in \mathcal{Y}$ to their meta-audio sequences $a \in \mathcal{A}$. The text-only dataset $\mathcal{D}_{\text{text}}$ for Step 5 is obtained from the transcriptions in the validation split of the respective language dataset in the Common Voice datasets, and we remove transcriptions appearing in the test split. For Step 7, the energy normalization method described in Eqs. (8) and (9) is applied.
## 4.4 Performance

We evaluate the performance of MAC on the test split of the corresponding language dataset in the Common Voice datasets. The evaluation metric is the character error rate (CER), which measures the difference between the predicted and ground-truth transcripts. We compare the results with baselines obtained by training directly on the original training split of labeled data with the data augmentation techniques mentioned above, such as speed perturbation and spectral augmentation.
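For clarity, the CER used throughout the evaluation is the character-level edit distance divided by the reference length; a minimal implementation is sketched below (in practice a tested package such as `jiwer` would typically be used instead).

```python
# Character error rate via Levenshtein distance over characters.
def cer(reference: str, hypothesis: str) -> float:
    ref, hyp = list(reference), list(hypothesis)
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


print(cer("你好早晨", "你好晨"))   # 0.25: one deletion over four reference characters
```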
The main results are shown in Table 3. We take the wav2vec2 + fine-tuning results from the URLs,⁴ rounded to one decimal place. In all three tasks, the MAC method reduces the CER by more than 15%. Here, we only show results using CTC prefix decoding; in fact, using attention rescoring decoding may yield better results, which can be found in the appendix.
Furthermore, MAC outperforms wav2vec2 (with fine-tuning) and achieves a new state-of-the-art (SOTA) on the Common Voice Cantonese ASR task. The asterisk (*) in Table 3 indicates that the 24.9 CER is achieved by using extra data beyond the Japanese audio dataset in the Common Voice datasets when fine-tuning the wav2vec2 model. It is shown that MAC relatively improves the performance by about 20% on the Cantonese ASR task (and we can achieve a 10.9 CER with attention rescoring decoding on the Cantonese ASR task, an around 30% relative improvement compared to fine-tuning the wav2vec2 model; it is a new SOTA to the best of our knowledge, and more details can be found in the appendix). Additionally, for the Taiwanese and Japanese ASR tasks, MAC also achieves comparable results to the fine-tuned wav2vec2 model (and we can achieve a 23.4 CER with attention rescoring decoding on the Japanese ASR task, an around 6% relative improvement compared to fine-tuning the wav2vec2 model; see more details in the appendix).

³ We can directly use the baseline model in Table 3 to perform forced alignment.

⁴ Cantonese: https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese, Japanese: https://huggingface.co/qqhann/w2v_hf_jsut_xlsr53, and Taiwanese: https://huggingface.co/voidful/wav2vec2-large-xlsr-53-tw-gpt.
## 5 Ablation Studies

In this section, we conduct ablation experiments to evaluate the performance of MAC, aiming to gain a deeper understanding of MAC and its various components, as well as to provide insights into the most effective ways to use this method for ASR modeling. We focus on three key aspects:

1. What are the advantages of MAC over other NN-based TTS methods? How much room is there to improve the ASR performance by adding synthetic audio?

2. What is the impact of the synthetic data quantity on the ASR performance?

3. Does energy normalization really help to improve the ASR performance?

## 5.1 Comparison: MAC And Other NN-Based Methods
| Language | Data | Attention | CTC greedy | CTC prefix | Attention rescore |
|------------|------------------------------|-----------|------------|------------|-------------------|
| Cantonese | Original training split | 44.1 | 32.5 | 32.5 | 30.6 |
| | + Synthesized by MAC | 11.0 | 12.7 | 12.7 | 10.9 |
| | + Validation split | 6.7 | 7.0 | 7.0 | 6.1 |
| Japanese | Original training split | 72.1 | 45.3 | 45.3 | 44.5 |
| | + Synthesized by MAC | 24.3 | 25.1 | 25.0 | 23.4 |
| | + Tacotron2 synthesized data | 25.8 | 24.2 | 24.2 | 23.0 |
| | + Validation split | 8.5 | 8.3 | 8.3 | 7.5 |
| Taiwanese | Original training split | 55.3 | 51.3 | 51.3 | 48.2 |
| | + Synthesized by MAC | 18.6 | 22.0 | 22.0 | 19.5 |
| | + Validation split | 10.0 | 11.5 | 11.5 | 9.9 |
Table 4: Comparison of speech recognition results (CER). "Original training split" refers to only using the training split data from the Common Voice datasets, "+ Synthesized by MAC" refers to adding MAC-synthesized data, "+ Validation split" refers to adding the validation split data of the Common Voice datasets, and "+ Tacotron2 synthesized data" refers to adding the synthesized data generated by the Tacotron2 model (Shen et al., 2018).
In this section, we discuss the advantages of MAC over other NN-based TTS systems under low-resource scenarios and report the results of adding real data instead of synthesized data to assess the remaining room for improvement. Table 4 presents the corresponding experimental results.

One significant advantage of MAC over other NN-based TTS systems is that it can still be applied under low-resource conditions. The limited availability of annotated audio data makes it challenging to train an NN-based TTS system for audio data synthesis. In fact, for languages like Cantonese, Taiwanese, and Japanese, we are unable to train an NN-based TTS system successfully using the limited labeled data in the Common Voice datasets.

For comparison, we use a Japanese TTS system,⁵ which employs extra data for training. Unfortunately, we found no publicly available TTS systems for Cantonese or Taiwanese, likely due to the scarcity of annotated data. To ensure a fair comparison, we fix the number of synthesized samples at 30,000, and we use traditional data augmentation techniques such as speed perturbation and SpecAugment (Park et al., 2019) in all settings.

⁵ This is available at https://github.com/coqui-ai/TTS.

Nonetheless, MAC achieves comparable results to the NN-based TTS system (Tacotron2), but without the requirement of additional data for training or extensive inference operations, as reported in Table 4. We further add real data instead of synthesized data to explore the remaining room for improvement. In Table 4, "+ Validation split" denotes adding the validation split data of the Common Voice datasets, which can be viewed as an upper bound of the optimal performance. The results demonstrate that adding MAC-synthesized data significantly improves the recognition accuracy compared to only using the original training split data in the Common Voice datasets, leaving only limited room for further improvement.

In summary, the results suggest that adding MAC-synthesized speech data is comparable to adding NN-based TTS synthesized data in improving the ASR performance, but without requiring additional data or extensive inference computation. Additionally, the results imply that there is little room for further improvement by improving the quality of the synthesized data with an advanced NN-based TTS system.
## 5.2 Impact Of Synthetic Data Quantity

The amount of data for training an ASR model is an important factor that can significantly impact the performance. In this section, we explore the impact of the synthetic data quantity on the performance of different ASR models on the Taiwanese and Cantonese datasets. Specifically, we examine how the CER changes when more synthetic data is added to the training set. The results are shown in Table 5.
**Taiwanese**

| Utt | Attention | CTC greedy | CTC prefix | Attention rescore |
|-------------|-----------|------------|------------|-------------------|
| 10000 | 25.3 | 30.0 | 29.9 | 26.8 |
| 30000 | 18.6 | 23.3 | 23.3 | 20.7 |
| All (69315) | 18.6 | 22.0 | 22.0 | 19.5 |

**Cantonese**

| Utt | Attention | CTC greedy | CTC prefix | Attention rescore |
|-------------|-----------|------------|------------|-------------------|
| 10000 | 19.9 | 19.5 | 19.5 | 17.4 |
| 30000 | 12.8 | 14.1 | 14.1 | 12.0 |
| All (63999) | 11.0 | 12.7 | 12.7 | 10.9 |

Table 5: The effect of synthetic data quantity on the ASR performance. "Utt" is the number of synthesized utterances added; all values are CER (%).
Based on Table 5, we observe a decrease in the CER for all models when more data is added, but the magnitude of the decrease becomes smaller as the amount of synthetic data increases. For example, on Taiwanese, the model with the CTC prefix decoding mode has a CER of 29.9% with 10,000 utterances, which drops to 23.3% with 30,000 utterances, but only drops to 22.0% with all 69,315 utterances. Similarly, on Cantonese, the model with the CTC prefix decoding mode has a CER of 19.5% with 10,000 utterances, which drops to 14.1% with 30,000 utterances, but only drops to 12.7% with all 63,999 utterances. This indicates that adding more synthetic data does help to improve the performance, but the improvement becomes smaller as the amount of data increases. The results suggest that adding more data can continue to improve the performance, but it may not be practical or feasible to collect and use all the data. The reasonable amount of required data depends on the desired level of performance and the data availability; it is necessary to trade off the cost of collecting data against the potential improvement in performance.
## 5.3 Effect Of Energy Normalization

| Language | Model | Attention | CTC greedy | CTC prefix | Attention rescore |
|-----------|-----------------------|-----------|------------|------------|-------------------|
| Cantonese | Without normalization | 12.5 | 13.8 | 13.8 | 11.8 |
| | With normalization | 11.0 | 12.7 | 12.7 | 10.9 |
| Taiwanese | Without normalization | 18.6 | 22.3 | 22.3 | 20.2 |
| | With normalization | 18.6 | 22.0 | 22.0 | 19.5 |

Table 6: Comparison of speech recognition results (CER) on Cantonese and Taiwanese with and without energy normalization.

In this section, we explore the role of energy normalization. Specifically, we present experimental results with and without energy normalization on two different languages. The results demonstrate that energy normalization can improve the quality of synthesized audios, resulting in lower error rates.

Table 6 presents results for four decoding modes (attention, CTC greedy, CTC prefix, and attention rescoring) on two different languages (Cantonese and Taiwanese) with and without energy normalization. It is shown that energy normalization generally improves the performance of the corresponding models, leading to lower character error rates, which demonstrates its importance. In general, the use of energy normalization can enhance the quality of synthesized audios, which potentially leads to better ASR modeling.
## 6 Conclusion

In this work, we propose the MAC framework as a unified solution for low-resource automatic speech recognition tasks. The framework incorporates a broad notion of meta-audio sets, which enables its application as long as there is knowledge of the required pronunciation rules to construct a suitable meta-audio set. Additionally, we provide a clear mathematical description of the MAC framework from the perspective of Bayesian sampling.

Our experiments demonstrate the effectiveness of MAC in low-resource speech recognition tasks, achieving remarkable improvements in accuracy even without careful tuning of hyper-parameters. Furthermore, the proposed method significantly improves the performance of speech recognition systems in low-resource settings. Our ablation experiments provide insights into the contribution of the different components, demonstrating that speech concatenation synthesis with forced alignment, meta-audios, and energy normalization can be a useful data augmentation technique. The MAC method also has some limitations. For example, it requires in-domain texts and prior knowledge of pronunciation rules to construct the meta-audio sets, which may not always be readily available.

To conclude, the present work provides a comprehensive framework that can enhance the performance of speech recognition systems in low-resource settings. In future work, we plan to explore efficient ways to construct meta-audio sets and to combine MAC with other sampling procedures such as Li et al. (2019). We hope that the MAC framework can contribute to the development of low-resource speech recognition.
## References

Alexei Baevski, Steffen Schneider, and Michael Auli. vq-wav2vec: Self-supervised learning of discrete speech representations. *arXiv preprint arXiv:1910.05453*, 2019.

Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. *Advances in Neural Information Processing Systems*, 33:12449–12460, 2020.

William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. Listen, attend and spell. *arXiv preprint arXiv:1508.01211*, 2015.

Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. Unsupervised cross-lingual representation learning for speech recognition. *arXiv preprint arXiv:2006.13979*, 2020.

Linhao Dong, Shuang Xu, and Bo Xu. Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition. In *2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 5884–5888. IEEE, 2018.

Chenpeng Du, Hao Li, Yizhou Lu, Lan Wang, and Yanmin Qian. Data augmentation for end-to-end code-switching speech recognition. In *2021 IEEE Spoken Language Technology Workshop (SLT)*, pp. 194–200. IEEE, 2021.

Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, et al. Conformer: Convolution-augmented transformer for speech recognition. *arXiv preprint arXiv:2005.08100*, 2020.

François Hernandez, Vincent Nguyen, Sahar Ghannay, Natalia Tomashenko, and Yannick Esteve. TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation. In *International Conference on Speech and Computer*, pp. 198–208. Springer, 2018.

Yosuke Higuchi, Niko Moritz, Jonathan Le Roux, and Takaaki Hori. Momentum pseudo-labeling: Semi-supervised ASR with continuously improving pseudo-labels. *IEEE Journal of Selected Topics in Signal Processing*, 16(6):1424–1438, 2022.

Rubeena A Khan and Janardan Shrawan Chitode. Concatenative speech synthesis: A review. *International Journal of Computer Applications*, 136(3):1–6, 2016.

Jaeyoung Kim, Han Lu, Anshuman Tripathi, Qian Zhang, and Hasim Sak. Reducing streaming ASR model delay with self alignment. *arXiv preprint arXiv:2105.05005*, 2021.

Aleksandr Laptev, Roman Korostik, Aleksey Svischev, Andrei Andrusenko, Ivan Medennikov, and Sergey Rybin. You do not need more data: Improving end-to-end speech recognition by text-to-speech data augmentation. In *2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)*, pp. 439–444. IEEE, 2020.

Jason Li, Ravi Gadde, Boris Ginsburg, and Vitaly Lavrukhin. Training neural speech recognition systems with synthetic speech augmentation. *arXiv preprint arXiv:1811.00707*, 2018.

Xinjian Li, Siddharth Dalmia, Alan W Black, and Florian Metze. Multilingual speech recognition with corpus relatedness sampling. *arXiv preprint arXiv:1908.01060*, 2019.

Fernando López and Jordi Luque. Iterative pseudo-forced alignment by acoustic CTC loss for self-supervised ASR domain adaptation. *arXiv preprint arXiv:2210.15226*, 2022.

Vincent Lostanlen, Justin Salamon, Mark Cartwright, Brian McFee, Andrew Farnsworth, Steve Kelling, and Juan Pablo Bello. Per-channel energy normalization: Why and how. *IEEE Signal Processing Letters*, 26(1):39–43, 2018.

Zeping Min and Cheng Tai. Why pseudo label based algorithm is effective? From the perspective of pseudo labeled data. *arXiv preprint arXiv:2211.10039*, 2022.

Zeping Min, Qian Ge, and Zhong Li. 10 hours data is all you need. *arXiv preprint arXiv:2210.13067*, 2022.

Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: an ASR corpus based on public domain audio books. In *2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 5206–5210. IEEE, 2015.

Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and Quoc V Le. SpecAugment: A simple data augmentation method for automatic speech recognition. *arXiv preprint arXiv:1904.08779*, 2019.

Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. The Kaldi speech recognition toolkit. In *IEEE 2011 Workshop on Automatic Speech Recognition and Understanding*. IEEE Signal Processing Society, 2011.

Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. *Proceedings of the IEEE*, 77(2):257–286, 1989.

Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Searching for activation functions. *arXiv preprint arXiv:1710.05941*, 2017.

Elena Rodríguez, Belén Ruíz, Ángel García-Crespo, and Fernando García. Speech/speaker recognition using a HMM/GMM hybrid model. In *International Conference on Audio- and Video-Based Biometric Person Authentication*, pp. 227–234. Springer, 1997.

Andrew Rosenberg, Yu Zhang, Bhuvana Ramabhadran, Ye Jia, Pedro Moreno, Yonghui Wu, and Zelin Wu. Speech recognition with augmented synthesized speech. In *2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)*, pp. 996–1002. IEEE, 2019.

Nick Rossenbach, Albert Zeyer, Ralf Schlüter, and Hermann Ney. Generating synthetic audio data for attention-based speech recognition systems. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 7069–7073. IEEE, 2020.

Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. wav2vec: Unsupervised pre-training for speech recognition. *arXiv preprint arXiv:1904.05862*, 2019.

Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In *2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 4779–4783. IEEE, 2018.

Guangzhi Sun, Yu Zhang, Ron J Weiss, Yuan Cao, Heiga Zen, Andrew Rosenberg, Bhuvana Ramabhadran, and Yonghui Wu. Generating diverse and natural text-to-speech samples using a quantized fine-grained VAE and autoregressive prosody prior. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 6699–6703. IEEE, 2020.

Youcef Tabet and Mohamed Boughazi. Speech synthesis techniques. A survey. In *International Workshop on Systems, Signal Processing and their Applications (WOSSPA)*, pp. 67–70. IEEE, 2011.

Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura. Listening while speaking: Speech chain by deep learning. In *2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)*, pp. 301–308. IEEE, 2017.

Sei Ueno, Masato Mimura, Shinsuke Sakai, and Tatsuya Kawahara. Data augmentation for ASR using TTS via a discrete representation. In *2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)*, pp. 68–75. IEEE, 2021.

Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, et al. Espnet: End-to-end speech processing toolkit. *arXiv preprint arXiv:1804.00015*, 2018.
|
501 |
+
|
502 |
+
Colin Wei, Kendrick Shen, Yining Chen, and Tengyu Ma. Theoretical analysis of self-training with deep networks on unlabeled data. *arXiv preprint arXiv:2010.03622*, 2020.
|
503 |
+
|
504 |
+
Qiantong Xu, Tatiana Likhomanenko, Jacob Kahn, Awni Hannun, Gabriel Synnaeve, and Ronan Collobert.
|
505 |
+
|
506 |
+
Iterative pseudo-labeling for speech recognition. *arXiv preprint arXiv:2005.09267*, 2020.
|
507 |
+
|
508 |
+
Zhuoyuan Yao, Di Wu, Xiong Wang, Binbin Zhang, Fan Yu, Chao Yang, Zhendong Peng, Xiaoyu Chen, Lei Xie, and Xin Lei. Wenet: Production oriented streaming and non-streaming end-to-end speech recognition toolkit. *arXiv preprint arXiv:2102.01547*, 2021.
|
509 |
+
|
510 |
+
Cheng Yi, Jianzong Wang, Ning Cheng, Shiyu Zhou, and Bo Xu. Transfer ability of monolingual wav2vec2. 0 for low-resource speech recognition. In *2021 International Joint Conference on Neural Networks (IJCNN)*,
|
511 |
+
pp. 1–6. IEEE, 2021.
|
512 |
+
|
513 |
+
Rui Zhao, Jian Xue, Jinyu Li, Wenning Wei, Lei He, and Yifan Gong. On addressing practical challenges for rnn-transducer. In *2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)*, pp. 526–533. IEEE, 2021.
|
514 |
+
|
515 |
+
## A More Experimental Results
As a supplement to Table 3, we consider several different decoding modes: CTC greedy search, CTC prefix beam search, attention, and attention rescore. The complete results are shown in Table 7.

| task | model | decode mode | CER |
|---------------|----------------------------------|------------------------|-------|
| Cantonese ASR | Hybrid CTC/attention model | CTC greedy search | 32.5 |
| | | CTC prefix beam search | 32.5 |
| | | attention | 44.1 |
| | | attention rescore | 30.6 |
| | Hybrid CTC/attention model + MAC | CTC greedy search | 12.7 |
| | | CTC prefix beam search | 12.7 |
| | | attention | 11.0 |
| | | attention rescore | 10.9 |
| | wav2vec2 + fine-tuning | - | 15.4 |
| Taiwanese ASR | Hybrid CTC/attention model | CTC greedy search | 51.3 |
| | | CTC prefix beam search | 51.3 |
| | | attention | 55.3 |
| | | attention rescore | 48.2 |
| | Hybrid CTC/attention model + MAC | CTC greedy search | 22.0 |
| | | CTC prefix beam search | 22.0 |
| | | attention | 18.6 |
| | | attention rescore | 19.5 |
| | wav2vec2 + fine-tuning | - | 18.4 |
| Japanese ASR | Hybrid CTC/attention model | CTC greedy search | 45.3 |
| | | CTC prefix beam search | 45.3 |
| | | attention | 72.1 |
| | | attention rescore | 44.5 |
| | Hybrid CTC/attention model + MAC | CTC greedy search | 25.1 |
| | | CTC prefix beam search | 25.0 |
| | | attention | 24.3 |
| | | attention rescore | 23.4 |
| | wav2vec2 + fine-tuning | - | 24.9* |

Table 7: CERs for Cantonese, Taiwanese, and Japanese ASR tasks. We use the advanced hybrid CTC/attention model as the baseline, tested with four decoding modes (CTC greedy search, CTC prefix beam search, attention, and attention rescore). MAC boosts performance significantly and achieves results competitive with fine-tuning the pretrained wav2vec2 model.
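All numbers in Table 7 are character error rates. As a reminder of what the metric measures, the sketch below is a minimal, generic CER computation: the character-level Levenshtein distance summed over the corpus and divided by the total number of reference characters. It is not necessarily the exact scoring script used in our experiments, and the Cantonese transcript pair in the example is hypothetical.

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Character-level Levenshtein distance, computed with a single rolling row."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            cur = min(dp[j] + 1,        # deletion
                      dp[j - 1] + 1,    # insertion
                      prev + (r != h))  # substitution (free if characters match)
            prev, dp[j] = dp[j], cur
    return dp[-1]


def cer(references, hypotheses) -> float:
    """Corpus-level CER: total character edits divided by total reference characters."""
    edits = sum(edit_distance(r, h) for r, h in zip(references, hypotheses))
    chars = sum(len(r) for r in references)
    return edits / max(chars, 1)


# Hypothetical Cantonese reference/hypothesis pair with one substituted character.
print(cer(["今日天氣好"], ["今日天气好"]))  # 1 edit over 5 characters -> 0.2
```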
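Among the four decoding modes compared above, CTC greedy search is the simplest: take the most probable label at each frame, merge consecutive repeated labels, and remove the blank symbol. The sketch below only illustrates these mechanics on a toy posterior matrix with a hypothetical four-symbol vocabulary; the experiments themselves presumably rely on the decoder implementation of the end-to-end toolkit used for the hybrid CTC/attention model. CTC prefix beam search instead keeps several candidate prefixes per frame, and attention rescore re-ranks the CTC candidates with the attention decoder.

```python
import numpy as np


def ctc_greedy_search(log_probs: np.ndarray, vocab, blank_id: int = 0) -> str:
    """Frame-wise argmax, then collapse repeated labels and drop CTC blanks."""
    best_path = log_probs.argmax(axis=-1)
    out, prev = [], None
    for idx in best_path:
        if idx != prev and idx != blank_id:
            out.append(vocab[idx])
        prev = idx
    return "".join(out)


# Toy frame posteriors over a hypothetical vocabulary; index 0 is the CTC blank.
vocab = ["<blank>", "好", "天", "氣"]
log_probs = np.log(np.array([
    [0.1, 0.1, 0.7, 0.1],   # frame 1 -> 天
    [0.1, 0.1, 0.7, 0.1],   # frame 2 -> 天 (repeat, merged)
    [0.7, 0.1, 0.1, 0.1],   # frame 3 -> blank (dropped)
    [0.1, 0.1, 0.1, 0.7],   # frame 4 -> 氣
    [0.1, 0.7, 0.1, 0.1],   # frame 5 -> 好
]))
print(ctc_greedy_search(log_probs, vocab))  # -> "天氣好"
```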
gNWyr7KBGj/gNWyr7KBGj_meta.json
ADDED
@@ -0,0 +1,25 @@
{
    "languages": null,
    "filetype": "pdf",
    "toc": [],
    "pages": 17,
    "ocr_stats": {
        "ocr_pages": 0,
        "ocr_failed": 0,
        "ocr_success": 0,
        "ocr_engine": "none"
    },
    "block_stats": {
        "header_footer": 17,
        "code": 0,
        "table": 7,
        "equations": {
            "successful_ocr": 24,
            "unsuccessful_ocr": 0,
            "equations": 24
        }
    },
    "postprocess_stats": {
        "edit": {}
    }
}