{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:14:05.101051Z"
},
"title": "Development of Smartcall Vietnamese Text-to-Speech for VLSP 2020",
"authors": [
{
"first": "Manh",
"middle": [
"Cuong"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Duy",
"middle": [
"Khuong"
],
"last": "Trieu",
"suffix": "",
"affiliation": {},
"email": "ntphuong@itcu.edu.vn"
},
{
"first": "Thu",
"middle": [
"Phuong"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Bao",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "An end-to-end text-to-speech (TTS) system (e.g. consisting of Tacotron-2 and WaveGlow vocoder) can achieve the state-of-the art quality in the presence of a large, professionallyrecorded training database. However, the drawbacks of using neural vocoders such as WaveGlow include 1) a time-consuming training process, 2) a slow inference speed, and 3) resource hunger when synthesizing waveform from spectral features. Moreover, the synthesized waveform from the neural vocoder can inherit the noise from an imperfect training data. This paper deals with the task of building Vietnamese TTS systems from moderate quality training data with noise. Our system utilizes an end-to-end TTS system that takes advantage of the Tacotron-2 acoustic model, and a custom vocoder combining a High Fidelity Generative Adversarial Networks (HiFiGAN)-based vocoder and a Wave-Glow denoiser. Specifically, we used the Hi-FiGAN vocoder to achieve a better performance in terms of inference efficiency, and speech quality. Unlike previous works, we used WaveGlow as an effective denoiser to address the noisy synthesized speech. Moreover, the provided training data was thoroughly preprocessed using voice activity detection, automatic speech recognition and prosodic punctuation insertion. Our experiment showed that the proposed TTS system (as a combination of Tacotron-2, HiFiGAN-based vocoder, and WaveGlow denoiser) trained on the preprocessed data achieved a mean opinion score (MOS) of 3.77 compared to 4.22 for natural speech, which is the best result among participating systems of VLSP 2020's TTS evaluation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "An end-to-end text-to-speech (TTS) system (e.g. consisting of Tacotron-2 and WaveGlow vocoder) can achieve the state-of-the art quality in the presence of a large, professionallyrecorded training database. However, the drawbacks of using neural vocoders such as WaveGlow include 1) a time-consuming training process, 2) a slow inference speed, and 3) resource hunger when synthesizing waveform from spectral features. Moreover, the synthesized waveform from the neural vocoder can inherit the noise from an imperfect training data. This paper deals with the task of building Vietnamese TTS systems from moderate quality training data with noise. Our system utilizes an end-to-end TTS system that takes advantage of the Tacotron-2 acoustic model, and a custom vocoder combining a High Fidelity Generative Adversarial Networks (HiFiGAN)-based vocoder and a Wave-Glow denoiser. Specifically, we used the Hi-FiGAN vocoder to achieve a better performance in terms of inference efficiency, and speech quality. Unlike previous works, we used WaveGlow as an effective denoiser to address the noisy synthesized speech. Moreover, the provided training data was thoroughly preprocessed using voice activity detection, automatic speech recognition and prosodic punctuation insertion. Our experiment showed that the proposed TTS system (as a combination of Tacotron-2, HiFiGAN-based vocoder, and WaveGlow denoiser) trained on the preprocessed data achieved a mean opinion score (MOS) of 3.77 compared to 4.22 for natural speech, which is the best result among participating systems of VLSP 2020's TTS evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text-to-speech synthesis plays a crucial role in speech-based interaction systems. In the last two decades, there have been many attempts to build high quality Vietnamese TTS systems. A data processing scheme proved its efficacy in optimizing naturalness of end-to-end TTS systems trained on Vietnamese found data (Phung et al., 2020) . Text normalization methods were explored; utilizing regular expressions and language model (Tuan et al., 2012) . New prosodic features (e.g. phrase breaks) were investigated, which showed their efficacy in improving naturalness of Vietnamese hidden Markov models (HMM)-based TTS systems Trang et al., 2013; . Different types of acoustic models were investigated such as HMM , deep neural networks (DNN) (Nguyen et al., 2019) , and sequence-to-sequence models (Phung et al., 2020) . For postfiltering, it was shown that a global variance scaling method may destroy the tonal information; therefore, exemplar-based voice conversion methods were utilized in postfiltering to preserve the tonal information (Tuan et al., 2016) . To our knowledge, there is little to none research on vocoders for Vietnamese TTS systems, especially when the training data is moderately noisy.",
"cite_spans": [
{
"start": 314,
"end": 334,
"text": "(Phung et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 428,
"end": 447,
"text": "(Tuan et al., 2012)",
"ref_id": "BIBREF13"
},
{
"start": 624,
"end": 643,
"text": "Trang et al., 2013;",
"ref_id": "BIBREF12"
},
{
"start": 740,
"end": 761,
"text": "(Nguyen et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 796,
"end": 816,
"text": "(Phung et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 1040,
"end": 1059,
"text": "(Tuan et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the International Workshop on Vietnamese Language and Speech Processing (VLSP) 2020, a TTS challenge (Trang et al., 2020) required participants to build Vietnamese TTS systems from a provided moderately noisy corpus. The corpus included raw text and corresponding audio files. However, the corpus has incorrect pronunciation of a foreign language, the slight buzzer sounds in audio data, and many incorrectly labeled words, which pose significant challenges to participants. For example, a general neural vocoder will learn the buzzer sounds from the corpus, and introduce it to the synthesized speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In previous VLSP 2019's TTS evaluation, Tacotron-2 and WaveGlow neural vocoder were combined to achieve the best speech quality in Vietnamese speech synthesis (Lam et al.) . However, HiFiGAN vocoder significantly outperformed WaveGlow vocoder in term of vocoding quality and efficiency (Kong et al., 2020) . In the paper, we present the complete steps of building our endto-end TTS system combining data preprocessing (Phung et al., 2020) and end-to-end modeling which showed that the system addressed the data problems and achieved high performance and high efficiency.",
"cite_spans": [
{
"start": 159,
"end": 171,
"text": "(Lam et al.)",
"ref_id": null
},
{
"start": 286,
"end": 305,
"text": "(Kong et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, we introduced a solution that combines HiFiGAN and WaveGlow denoiser as a custom vocoder to enhance the quality of the final synthesized sound. Specifically, in Section II, we present the TTS system architecture consisting of a Tacotron-2 network followed by the HiFiGAN model as a vocoder and the WaveGlow model as a denoiser. The use of HiFiGAN has both improved aggregation speed and reduced resource size, and utilizing WaveGlow denoiser significantly reduces unexpected noise of synthesized speech. The challenges of naturalness, background noise and buzzer noises in the artificial sound were also overcome by combining Tacotron-2, a HiFiGAN-based vocoder and a WaveGlow denoiser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We inherited the data processing method (as shown in Figure 1 ) proposed in (Phung et al., 2020). We remove non-speech segments from the audio files using Voice Activity Detection (VAD) model (Kim and Hahn, 2018) . As for textual data, we normalized the original text to lower case without punctuation, then use the results from an Automatic Speech Recognition (ASR) (Peddinti et al., 2015) model to define unvoiced intervals to automatic punctuation to improve the naturalness and prosody of synthesized voices (Phung et al., 2020). Moreover, there is an enormous number of English words in the provided databases, so our solution is to borrow Vietnamese sounds to read the English words. Even, the English words can consist of Vietnamese syllables and English fricative sounds (for example, x sound) if necessary (for instance, \"study\" becomes 'x-ta-di'), which can make it easier for the model to learn the fricative sounds. Also, by selecting the pronunciation of English words, we introduced uncommon Vietnamese syllables, which enriched the vocabulary of the training data set. The overall text normalization was carried out using regular expressions and a dictionary. Finally, we manually reviewed and corrected the transcription. The data processing scheme is shown in Figure 1 2.1.1 Voice Activity Detection",
"cite_spans": [
{
"start": 192,
"end": 212,
"text": "(Kim and Hahn, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 367,
"end": 390,
"text": "(Peddinti et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 53,
"end": 61,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
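{
"text": "The following minimal sketch illustrates the dictionary-plus-regular-expression normalization described above; it is not the original implementation, and BORROW_DICT is a hypothetical stand-in for the full borrowing dictionary (only the paper's \"study\" example is real):\n\nimport re\n\n# Hypothetical borrowing dictionary: English words mapped to\n# Vietnamese-style syllable strings; only 'study' -> 'x-ta-di'\n# comes from the paper, the rest of the table is assumed.\nBORROW_DICT = {'study': 'x-ta-di'}\n\ndef normalize(text):\n    # Lower-case, strip punctuation, then substitute borrowed\n    # pronunciations for known English words.\n    text = text.lower()\n    text = re.sub(r'[^\\\\w\\\\s-]', ' ', text)\n    return ' '.join(BORROW_DICT.get(w, w) for w in text.split())\n\nprint(normalize('Study hard!'))  # -> 'x-ta-di hard'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},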
{
"text": "We used the Voice Activity Detection (VAD) module to split long audio files of many sentences into short speech segments corresponding to many new sentences. Additionally, large silences at the beginning and the end of each audio were removed. We utilized the a VAD model (Kim and Hahn, 2018) including a Long Short Term Memory Recurrent Neural Network (LSTM-RNN)-based classification.",
"cite_spans": [
{
"start": 272,
"end": 292,
"text": "(Kim and Hahn, 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
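{
"text": "As an illustration of how such a VAD front end can drive segmentation, the sketch below turns per-frame speech posteriors (e.g., from an LSTM-RNN classifier) into speech segments, implicitly trimming leading and trailing silence; the VAD model itself is assumed, and the thresholds are illustrative rather than the paper's settings:\n\nimport numpy as np\n\ndef frames_to_segments(speech_probs, hop_s=0.01, threshold=0.5, min_gap_s=0.3):\n    # Binarize per-frame posteriors, then collect contiguous\n    # speech runs as (start, end) times in seconds.\n    is_speech = np.asarray(speech_probs) > threshold\n    segments, start = [], None\n    for i, s in enumerate(is_speech):\n        if s and start is None:\n            start = i * hop_s\n        elif not s and start is not None:\n            segments.append((start, i * hop_s))\n            start = None\n    if start is not None:\n        segments.append((start, len(is_speech) * hop_s))\n    # Merge runs separated by pauses shorter than min_gap_s, so one\n    # sentence with short internal pauses stays a single segment.\n    merged = segments[:1]\n    for s, e in segments[1:]:\n        if s - merged[-1][1] < min_gap_s:\n            merged[-1] = (merged[-1][0], e)\n        else:\n            merged.append((s, e))\n    return merged",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voice Activity Detection",
"sec_num": "2.1.1"
},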
{
"text": "We utilized a Automatic Speech Recognition (ASR) system to obtain the time stamps of each word or each sound in each sentence. Moreover, the within-sentence pauses were identified and considered as potential punctuation. We marked a pause as a punctuation when its duration is greater than a threshold of 0.12 seconds. Then, the punctuation was added to input text. Without the added punctuation, the Tacotron-2 may align short pauses to any word or phoneme; which significantly reduce the quality of the synthesized voice. The ASR acoustic model is the state-of-the art Time Delay Neural Network (Peddinti et al., 2015) . To achieve the best performance on provided VLSP data, the language model is trained to over-fit the provided data.",
"cite_spans": [
{
"start": 597,
"end": 620,
"text": "(Peddinti et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Speech Recognition and Speech Punctuation",
"sec_num": "2.1.2"
},
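{
"text": "A minimal sketch of the pause-based punctuation insertion, assuming word-level time stamps in (word, start, end) form from an ASR forced alignment (the tuple format and helper name are assumptions; the 0.12-second threshold is from the paper):\n\ndef insert_pause_punctuation(aligned_words, pause_threshold=0.12):\n    # Append a comma to a word when the silence before the next\n    # word exceeds the threshold, marking a prosodic break.\n    out = []\n    for i, (word, start, end) in enumerate(aligned_words):\n        out.append(word)\n        if i + 1 < len(aligned_words) and aligned_words[i + 1][1] - end > pause_threshold:\n            out[-1] += ','\n    return ' '.join(out)\n\nwords = [('xin', 0.00, 0.25), ('chao', 0.27, 0.60), ('cac', 0.80, 1.00), ('ban', 1.02, 1.30)]\nprint(insert_pause_punctuation(words))  # -> 'xin chao, cac ban'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Speech Recognition and Speech Punctuation",
"sec_num": "2.1.2"
},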
{
"text": "We proposed a text-to-speech system which is robust to noisy training data. Our system (as shown in Figure 2 ) was composed of a recurrent sequence-to-sequence feature prediction network called Tacotron-2, which mapped text embedding to acoustic features, followed by a Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis (HiFiGAN)-based vocoder. When using the HiFiGAN-based vocoder alone, we realized that the synthesized speech was noisy. As a result, we utilized the WaveGlow model to denoise the synthesized sound. Therefore, our proposed speech synthesis system includes a Tacotron-2 as a -Tacotron-2: In previous VLSP 2019's TTS evaluation, Tacotron-2 was utilized in Vietnamese speech synthesis to achieve the best speech quality (Lam et al.) . Therefore, we utilized Tacotron-2 as our TTS acoustic model. Our network architecture was almost similar to (Shen et al., 2017) , with some modifications. Firstly, character embedding was used instead of phoneme embedding, which can take advantage of a more flexible and diverse pronunciation dictionary for the Vietnamese dataset. Lastly, we changed some parameters to better fit the data set which has a sampling rate of 22050 Hz, a minimum frequency of 75 Hz, and a amaximum frequency of 7600 Hz.",
"cite_spans": [
{
"start": 767,
"end": 779,
"text": "(Lam et al.)",
"ref_id": null
},
{
"start": 890,
"end": 909,
"text": "(Shen et al., 2017)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Proposed text-to-speech systems",
"sec_num": "2.2"
},
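{
"text": "The audio parameters above can be made concrete as a mel-spectrogram extraction step; in this sketch the 22050 Hz sampling rate and the 75-7600 Hz frequency band come from the paper, while the FFT, hop, and 80-band mel settings are the usual Tacotron-2 defaults and are assumptions:\n\nimport numpy as np\nimport librosa\n\nSR, N_FFT, HOP, N_MELS = 22050, 1024, 256, 80  # FFT/hop/bands assumed\nF_MIN, F_MAX = 75.0, 7600.0                    # from the paper\n\ndef mel_spectrogram(wav):\n    # Linear-frequency STFT power folded onto a mel filter bank,\n    # then log-compressed, as consumed by Tacotron-2.\n    mel = librosa.feature.melspectrogram(\n        y=wav, sr=SR, n_fft=N_FFT, hop_length=HOP,\n        n_mels=N_MELS, fmin=F_MIN, fmax=F_MAX)\n    return np.log(np.clip(mel, 1e-5, None))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed text-to-speech systems",
"sec_num": "2.2"
},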
{
"text": "-HiFiGAN: To achieve better vocoding quality and higher efficiency, we utilized a HiFiGANbased vocoder instead of WaveGlow vocoder. Our network architecture was similar to config V1 (Kong et al., 2020) . A mel-spectrogram was used as input of generator and upsamples it through transposed convolutions until the length of the output sequence matches the temporal resolution of a raw waveform.",
"cite_spans": [
{
"start": 182,
"end": 201,
"text": "(Kong et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed text-to-speech systems",
"sec_num": "2.2"
},
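{
"text": "The transposed-convolution upsampling can be sanity-checked with a toy stack; the rates below (8, 8, 2, 2, whose product 256 matches a 256-sample hop) follow HiFiGAN config V1 (Kong et al., 2020), but this sketch omits the generator's input/output convolutions and residual blocks and is only a shape check, not the full model:\n\nimport torch\nimport torch.nn as nn\n\nrates, channels, in_ch = [8, 8, 2, 2], [512, 256, 128, 64], 512\nlayers = []\nfor r, ch in zip(rates, channels):\n    # kernel = 2*stride with padding = stride//2 gives L_out = L_in * stride.\n    layers += [nn.ConvTranspose1d(in_ch, ch, 2 * r, stride=r, padding=r // 2), nn.LeakyReLU(0.1)]\n    in_ch = ch\nupsampler = nn.Sequential(*layers)\n\nx = torch.randn(1, 512, 100)   # 100 mel frames after an input conv (assumed)\nprint(upsampler(x).shape[-1])  # 25600 = 100 frames * 256 samples/frame",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed text-to-speech systems",
"sec_num": "2.2"
},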
{
"text": "-WaveGlow: Our network architecture was similar to (Prenger et al., 2019) . However, we only use WaveGlow for audio's noise reduction. First, we generate bias audio with mel-spectrogram from Tacotron-2 (sigma=0.0). And then we transform bias audio to bias mel-spectrogram. Next, for audio's noise reduction, we took the converted melspectrogram from the HiFi-GAN output minus the mel-spectrogram bias by a \"denoiser strength\" of 0.15. Finally, we obtained the last mel-spectrogram and converted it back to sound.",
"cite_spans": [
{
"start": 51,
"end": 73,
"text": "(Prenger et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed text-to-speech systems",
"sec_num": "2.2"
},
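{
"text": "A minimal sketch of the denoising idea: WaveGlow run at sigma=0.0 on the same mel-spectrogram produces \"bias\" audio whose spectrum captures the vocoder's noise floor, and that bias spectrum, scaled by the denoiser strength of 0.15, is subtracted from the spectrum of the HiFiGAN output. This follows the common WaveGlow denoiser recipe on STFT magnitudes; whether the paper subtracted mel or linear spectrograms is not fully specified, so the feature choice here is an assumption:\n\nimport numpy as np\nimport librosa\n\ndef denoise(audio, bias_audio, strength=0.15, n_fft=1024, hop=256):\n    # Spectral subtraction: remove the time-averaged bias magnitude\n    # from each frame, keep the original phase, and resynthesize.\n    spec = librosa.stft(audio, n_fft=n_fft, hop_length=hop)\n    bias = np.abs(librosa.stft(bias_audio, n_fft=n_fft, hop_length=hop))\n    bias = bias.mean(axis=1, keepdims=True)\n    mag = np.clip(np.abs(spec) - strength * bias, 0.0, None)\n    return librosa.istft(mag * np.exp(1j * np.angle(spec)), hop_length=hop, length=len(audio))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed text-to-speech systems",
"sec_num": "2.2"
},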
{
"text": "The goal of the subjective experiments is to show the efficacy of our proposed method when the training data is noisy. We used the Tacotron-2 acoustic model in combination with different vocoders including 1) WaveGlow vocoder (denoted as WaveGlow), 2) HiFiGAN vocoder (denoted as HiFiGAN), and 3) our proposed method combining HiFiGANbased vocoder and WaveGlow denoiser (denoted as HiFiGAN+Denoiser). The target natural speech is denoted as NAT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "The original corpus contained 9 hours and 23 minutes of speaking from a female speaker. And after removing the unvoiced parts, the corpus had 8 hours and 21 minutes of speech. All data has been entered to train from scratch for the Tacotron-2 model. We also trained our HiFiGAN and WaveGlow model on the ground truth-aligned predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Training",
"sec_num": "3.1"
},
{
"text": "We submitted our proposed system (described in Section 2) to the VLSP 2020's TTS evaluation. The system was evaluated using the VLSP organizer's subjective MOS test. There were 24 participants listening to the stimuli of synthesized and natural speech. The participants gave each utterance a score on a 5-point scale including \"very bad\", \"bad\", \"fair\", \"good\", and \"very good\". Details of the results of the second MOS test are given in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 438,
"end": 445,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.2"
},
{
"text": "Our system NAT 3.77 4.22 Table 1 : Average MOS of our proposed system (described in Section 2) from VLSP's TTS evaluation",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.2"
},
{
"text": "We conducted the second Mean Opinion Score (MOS) test to evaluate the performance of four vocoders (WaveGlow, HiFiGAN, and HiFi-GAN+WaveGlow) in speech synthesis. Each listener listened to 20 test sentences and rate the quality of each sentence in a 5-point scale including \"very bad\", \"bad\", \"fair\", \"good\", and \"very good\". In total, there are 20 (sentences) \u00d7 4 (systems) = 80 (trials) 1 in a Latin-square design. We need 80 \u00f7 20 = 4 listeners to cover all the trials.There were 12 participants in the the test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.2"
},
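{
"text": "One possible assignment consistent with these counts (the paper specifies only the totals, so the exact rotation below is an assumption) is a standard Latin-square rotation in which each listener hears every sentence exactly once and each system on a quarter of the sentences:\n\nsystems = ['WaveGlow', 'HiFiGAN', 'HiFiGAN+Denoiser', 'NAT']\nn_sentences, n_listeners = 20, 4\nfor listener in range(n_listeners):\n    # Rotate which system renders each sentence for this listener.\n    trials = [(s, systems[(s + listener) % 4]) for s in range(n_sentences)]\n    assert len(trials) == n_sentences\n# 4 listeners x 20 sentences = 80 trials, covering every sentence/system pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.2"
},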
{
"text": "We summarize the perceptual characteristics of each speech synthesis systems in Table 2 . The Figure 3 showed that our proposed system (denoted as HiFiGAN+Denoiser) has a highest MOS. The proposed system is better than natural speech (NAT) due to the fact that the target natural speech is noisy. The results showed that HiFiGAN vocoder outperformed WaveGlow vocoder when the training data is noisy.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 87,
"text": "Table 2",
"ref_id": "TABREF0"
},
{
"start": 90,
"end": 102,
"text": "The Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.2"
},
{
"text": "We also ran the benchmarks for three models on the same Nvidia GTX 1080 Ti GPU hardware, Systems Evaluate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.2"
},
{
"text": "Each pronouncing word has a buzzer, however, the background noise is noticeable HiFiGAN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WaveGlow",
"sec_num": null
},
{
"text": "The sound quality of each word has been improved, the background noise is moderate HiFiGAN+Denoiser The sound is clean with the same set of samples to show the inference efficicenty of using HiFiGAN-based vocoder. Statistics of real-time factor (RTF) values, which tells how many seconds of speech are generated in 1 second of wall time, are shown in Table 3 . The results show that the speech synthesis rate of the model with HiFiGAN vocoder compared to the model with WaveGlow vocoder is 1.8 times, which hugely improves the speed performance of the system. For the system with both HiFiGAN and WaveGlow, the speed performance is approximate to the model using only HiFiGAN, because the denoising process of WaveGlow is not computationally exhausting. The results indicate that the HiFiGAN-based vocoder has better inference efficiency than the WaveGlow vocoder.",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 358,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "WaveGlow",
"sec_num": null
},
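{
"text": "For reference, the real-time factor as defined here (seconds of speech produced per second of wall time) can be measured with a sketch like the following, where synthesize is a placeholder for any text-to-waveform pipeline such as Tacotron-2 plus a vocoder:\n\nimport time\n\ndef real_time_factor(synthesize, text, sample_rate=22050):\n    # Time one synthesis call and compare audio duration to wall time;\n    # values above 1 mean faster-than-real-time generation.\n    t0 = time.perf_counter()\n    wav = synthesize(text)  # placeholder: returns a 1-D sample array\n    wall = time.perf_counter() - t0\n    return (len(wav) / sample_rate) / wall",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3.2"
},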
{
"text": "On the other hand, the resource consumption of our proposed model increases due to the use of both HiFiGAN and WaveGlow denoiser. While the number of HiFiGAN's parameters is 13.92 million, the WaveGlow has six times more parameters than HiFiGAN (as shown in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WaveGlow",
"sec_num": null
},
{
"text": "In this report, we have presented our Vietnam TTS system for VLSP 2020. As for the challenge, our approach yields MOS result pretty close to this of natural speech. By testing various solutions to these challenges, we found that combining the methods to develop a custom vocoder played a significant role in the quality of synthesized speech. And the system efficiency was also significantly improved. As a result, the challenges of naturalness, background noise and buzzer noises in the artificial sound have been overcome. We plan to investigate other types of neural vocoders for improving the quality of speech synthesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION AND FUTURE WORKS",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Vietnamese hmmbased speech synthesis with prosody information",
"authors": [
{
"first": "Anh",
"middle": [],
"last": "Tuan Dinh",
"suffix": ""
},
{
"first": "Thanh",
"middle": [
"Son"
],
"last": "Phan",
"suffix": ""
},
{
"first": "Tat Thang",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Chi",
"middle": [
"Mai"
],
"last": "Luong",
"suffix": ""
}
],
"year": 2013,
"venue": "Eighth ISCA Workshop on Speech Synthesis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anh Tuan Dinh, Thanh Son Phan, Tat Thang Vu, and Chi Mai Luong. 2013. Vietnamese hmm- based speech synthesis with prosody information. In Eighth ISCA Workshop on Speech Synthesis, Barcelona, Spain.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Voice activity detection using an adaptive context attention model",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hahn",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Signal Processing Letters",
"volume": "25",
"issue": "8",
"pages": "1181--1185",
"other_ids": {
"DOI": [
"10.1109/LSP.2018.2811740"
]
},
"num": null,
"urls": [],
"raw_text": "J. Kim and M. Hahn. 2018. Voice activity detection us- ing an adaptive context attention model. IEEE Sig- nal Processing Letters, 25(8):1181-1185.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis",
"authors": [
{
"first": "Jungil",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Jaehyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jaekyoung",
"middle": [],
"last": "Bae",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020. Hifi-gan: Generative adversarial networks for ef- ficient and high fidelity speech synthesis. ArXiv, abs/2010.05646.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Development of zalo vietnamese textto-speech for vlsp 2019",
"authors": [
{
"first": "Phung Viet",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "Phan",
"middle": [],
"last": "Huy Kinh",
"suffix": ""
},
{
"first": "Anh",
"middle": [],
"last": "Dinh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tuan",
"suffix": ""
},
{
"first": "Nguyen Quoc",
"middle": [],
"last": "Trieu Khuong Duy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bao",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phung Viet Lam, Phan Huy Kinh, Dinh Anh Tuan, Trieu Khuong Duy, and Nguyen Quoc Bao. Development of zalo vietnamese text- to-speech for vlsp 2019. http://vlsp. org.vn/sites/default/files/2019-10/",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Development of vietnamese speech synthesis system using deep neural networks",
"authors": [
{
"first": "Bao",
"middle": [
"Quoc"
],
"last": "Thinh Van Nguyen",
"suffix": ""
},
{
"first": "Kinh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Huy Phan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Do",
"suffix": ""
}
],
"year": 2019,
"venue": "In Journal of Computer Science and Cybernetics",
"volume": "34",
"issue": "",
"pages": "349--363",
"other_ids": {
"DOI": [
"10.15625/1813-9663/34/4/13172"
]
},
"num": null,
"urls": [],
"raw_text": "Thinh Van Nguyen, Bao Quoc Nguyen, Kinh Huy Phan, and Hai Van Do. 2019. Development of viet- namese speech synthesis system using deep neural networks. In Journal of Computer Science and Cy- bernetics, volume 34, pages 349-363.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A time delay neural network architecture for efficient modeling of long temporal contexts",
"authors": [
{
"first": "V",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Peddinti, D. Povey, and S. Khudanpur. 2015. A time delay neural network architecture for efficient mod- eling of long temporal contexts. In Sixteenth Annual Conference of the International Speech Communica- tion Association.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improvement of naturalness for an hmm-based vietnamese speech synthesis using the prosodic information",
"authors": [
{
"first": "T",
"middle": [],
"last": "Phan",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Dinh",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Luong",
"suffix": ""
}
],
"year": 2013,
"venue": "The 2013 RIVF International Conference on Computing Communication Technologies -Research, Innovation, and Vision for Future (RIVF)",
"volume": "",
"issue": "",
"pages": "276--281",
"other_ids": {
"DOI": [
"10.1109/RIVF.2013.6719907"
]
},
"num": null,
"urls": [],
"raw_text": "T. Phan, T. Duong, A. Dinh, T. Vu, and C. Luong. 2013. Improvement of naturalness for an hmm-based viet- namese speech synthesis using the prosodic infor- mation. In The 2013 RIVF International Confer- ence on Computing Communication Technologies - Research, Innovation, and Vision for Future (RIVF), pages 276-281.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Data processing for optimizing naturalness of vietnamese text-to-speech system",
"authors": [
{
"first": "Phan",
"middle": [],
"last": "Viet Lam Phung",
"suffix": ""
},
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Huy Kinh",
"suffix": ""
},
{
"first": "Quoc Bao",
"middle": [],
"last": "Dinh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.09607"
]
},
"num": null,
"urls": [],
"raw_text": "Viet Lam Phung, Phan Huy Kinh, Anh Tuan Dinh, and Quoc Bao Nguyen. 2020. Data processing for opti- mizing naturalness of vietnamese text-to-speech sys- tem. arXiv:2004.09607.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Waveglow: A flow-based generative network for speech synthesis",
"authors": [
{
"first": "R",
"middle": [],
"last": "Prenger",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Valle",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Catanzaro",
"suffix": ""
}
],
"year": 2019,
"venue": "ICASSP 2019 -2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "3617--3621",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2019.8683143"
]
},
"num": null,
"urls": [],
"raw_text": "R. Prenger, R. Valle, and B. Catanzaro. 2019. Wave- glow: A flow-based generative network for speech synthesis. In ICASSP 2019 -2019 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3617-3621.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Yannis Agiomyrgiannakis, and Yonghui Wu. 2017. Natural TTS synthesis by conditioning wavenet on mel spectrogram predictions. CoRR",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Ruoming",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Ron",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Zongheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Skerry-Ryan",
"suffix": ""
},
{
"first": "Rif",
"middle": [
"A"
],
"last": "Saurous",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Agiomyrgiannakis",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, R. J. Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, and Yonghui Wu. 2017. Natural TTS synthesis by con- ditioning wavenet on mel spectrogram predictions. CoRR, abs/1712.05884.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Vietnamese text-to-speech shared task vlsp 2020: Remaining problems with state-of-the-art techniques in proceedings of the seventh international workshop on vietnamese language and speech processing",
"authors": [
{
"first": "Thu",
"middle": [],
"last": "Nguyen Thi",
"suffix": ""
},
{
"first": "Nguyen",
"middle": [],
"last": "Trang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hoang Ky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pham Quang Minh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vu Duy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manh",
"suffix": ""
}
],
"year": 2020,
"venue": "International workshop on Vietnamese Language and Speech Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nguyen Thi Thu Trang, Nguyen Hoang Ky, Pham Quang Minh, and Vu Duy Manh. 2020. Vietnamese text-to-speech shared task vlsp 2020: Remaining problems with state-of-the-art tech- niques in proceedings of the seventh international workshop on vietnamese language and speech processing (vlsp 2020). In International workshop on Vietnamese Language and Speech Processing (VLSP 2020).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Prosodic phrasing modeling for vietnamese tts using syntactic information",
"authors": [
{
"first": "Thu",
"middle": [],
"last": "Nguyen Thi",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Trang",
"suffix": ""
},
{
"first": "Tran",
"middle": [
"Do"
],
"last": "Rilliard",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Dat",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nguyen Thi Thu Trang, Albert Rilliard, Tran Do Dat, and Christophe d'Alessandro. 2013. Prosodic phras- ing modeling for vietnamese tts using syntactic in- formation. In Proceedings of Interspeech, Lyon, France.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A study of text normalization in vietnamese for text-to-speech system",
"authors": [
{
"first": "Phi Tung",
"middle": [],
"last": "Dinh Anh Tuan",
"suffix": ""
},
{
"first": "Phan Dang",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hung",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of Oriental COCOSDA Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dinh Anh Tuan, Phi Tung Lam, and Phan Dang Hung. 2012. A study of text normalization in vietnamese for text-to-speech system. In Proceedings of Orien- tal COCOSDA Conference, Macau, China.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Quality improvement of vietnamese hmmbased speech synthesis system based on decomposition of naturalness and intelligibility using nonnegative matrix factorization",
"authors": [
{
"first": "Anh",
"middle": [],
"last": "Dinh",
"suffix": ""
},
{
"first": "Phan",
"middle": [],
"last": "Tuan",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Thanh Son",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Akagi",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Information and Communication Technology. ICTA 2016. Advances in Intelligent Systems and Computing",
"volume": "538",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dinh Anh Tuan, Phan Thanh Son, and Masato Akagi. 2016. Quality improvement of vietnamese hmm- based speech synthesis system based on decompo- sition of naturalness and intelligibility using non- negative matrix factorization. In Advances in Infor- mation and Communication Technology. ICTA 2016. Advances in Intelligent Systems and Computing, vol 538. Springer, Cham.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Data Processing Scheme"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "End-to-end system architecture acoustic model, a HiFiGAN-based vocoder, and a WaveGlow denoiser."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Average MOS of four systems. Dashed lines show statistically significant differences with p-value < 10 \u22128"
},
"TABREF0": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Experimental reviews",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>). And the total</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>Models</td><td>Param (M)</td></tr><tr><td>WaveGlow</td><td>87.73</td></tr><tr><td>HiFiGAN</td><td>13.92</td></tr><tr><td>HiFiGAN and WaveGlow</td><td>101.65</td></tr></table>",
"type_str": "table",
"text": "RTF results number of parameters using both models is 101.65 million.",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Number of parameters",
"num": null
}
}
}
}