{ "paper_id": "O09-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:11:17.623852Z" }, "title": "Latent Prosody Model-Assisted Mandarin Accent Identification", "authors": [ { "first": "Yuan-Fu", "middle": [], "last": "Liao", "suffix": "", "affiliation": {}, "email": "yfliao@ntut.edu.tw" }, { "first": "Shuan-Chen", "middle": [], "last": "Yeh", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ming-Feng", "middle": [], "last": "Tsai", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Wei-Hsiung", "middle": [], "last": "Ting", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taipei University of Technology", "location": {} }, "email": "" }, { "first": "Sen-Chia", "middle": [], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Industrial Technology Research Institute", "location": {} }, "email": "5chang@itri.org.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A two-stage latent prosody model-language model (LPM-LM)-based approach is proposed to identify two Mandarin accent types spoken by native speakers in Mainland China and Taiwan. The frontend LPM tokenizes and jointly models the affections of speaker, tone and prosody state of an utterance. The backend LM takes the decoded prosody state sequences and builds n-grams to model the prosodic differences of the two accent types. Experimental results on a mixed TRSC and MAT database showed that fusion of the proposed LPM-LM with a SDC/GMM+PPR-LM+UPR-LM baseline system could further reduced the average accent identification error rate from 20.7% to 16.2%. Therefore, the proposed LPM-LM method is a promising approach.", "pdf_parse": { "paper_id": "O09-1010", "_pdf_hash": "", "abstract": [ { "text": "A two-stage latent prosody model-language model (LPM-LM)-based approach is proposed to identify two Mandarin accent types spoken by native speakers in Mainland China and Taiwan. The frontend LPM tokenizes and jointly models the affections of speaker, tone and prosody state of an utterance. The backend LM takes the decoded prosody state sequences and builds n-grams to model the prosodic differences of the two accent types. Experimental results on a mixed TRSC and MAT database showed that fusion of the proposed LPM-LM with a SDC/GMM+PPR-LM+UPR-LM baseline system could further reduced the average accent identification error rate from 20.7% to 16.2%. Therefore, the proposed LPM-LM method is a promising approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Over the past decades, many approaches have been proposed to deal with language identification (LID) tasks. They tried to capture the specific characteristics of different languages. These characteristics roughly fall into three categories: the phonetic repertoire, the phonotactics, and the prosody. The mainstream system (as shown in NIST language recognition evaluation (LRE) 2007) [1] is usually based on the fusion of multiple acoustic and phonotactic systems.", "cite_spans": [ { "start": 385, "end": 388, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Although LID is extensively studied, less works have been done on accent identification (AID), especially for native speakers, such as American and Indian English, Mainland China and Taiwan Mandarin, Hindi and Urdu Hindustani and Caribbean and non-Caribbean Spanish. 
Compared with the LID task, AID of native speakers is more challenging because (1) some linguistic knowledge, such as syllable structure, may be of little use since native speakers seldom make such mistakes, and (2) the differences among native speakers are relatively smaller than those among foreign (non-native) speakers. In other words, the capabilities of the popular acoustic and phonotactic approaches may be limited in this case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Many approaches have recently been proposed to model the prosodic differences between languages, dialects or accents [2] . Most of them are based on direct modeling of surface prosodic features, i.e., the raw prosodic features. For example, frame-level pitch flux features and GMMs were proposed in [3] ; segmental-level pitch features were extracted using Legendre polynomials and modeled by an ergodic Markov model in [4] ; and supra-segment-level prosodic features were captured by n-grams in [5] . However, surface prosodic features are often affected by many other non-prosodic latent factors, such as channel, speaker, phonetic context, and so on. Therefore, it is necessary to apply some feature normalization methods [6] to alleviate these unwanted effects. To absorb those unwanted effects, a two-stage latent prosody model-language model (LPM-LM)-based approach, as shown in Fig. 1 and Fig. 2 , is proposed in this study. The aim is to discriminate two Mandarin accent types spoken by native speakers in Mainland China and Taiwan.", "cite_spans": [ { "start": 108, "end": 111, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 300, "end": 303, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 418, "end": 421, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 493, "end": 496, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 722, "end": 725, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 899, "end": 905, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this approach, the frontend LPM [7] tokenizes (with the help of automatic speech recognizers (ASRs)) an input utterance into smaller prosodic units (sub-syllables in our case) and artificially introduces latent prosody states to represent the prosodic status of each token in an utterance. It then jointly models the effects of speaker, tone and prosody state on the surface prosodic features in order to decode more precise prosody state sequences of the utterance. The backend LM then takes the decoded prosody state sequences and builds an n-gram to model the supra-segmental prosodic characteristics of each accent type.", "cite_spans": [ { "start": 35, "end": 38, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In more detail, the LPM, as shown in Fig. 2 , (1) introduces a two-level hierarchical structure of speech prosody [8] with prosodic states and state transition probabilities, and (2) describes the joint effects of the latent factors in a state by a variable-parameter probability density function whose parameters vary as a function of those latent factor-dependent parameters. The purpose is to explain the variation due to speaker, phonetic context and, especially, tone factors.", "cite_spans": [ { "start": 107, "end": 110, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 32, "end": 38, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." },
2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "It is worth noting that (1) the proposed LPM-LM framework is similar to the popular parallel phone recognizer (PPR)-LM approach. However, the phone recognizers are replaced by automatic prosodic state tokenizers/labelers and, especially, (2) the LPM module could be trained in an unsupervised way to avoid any human annotation efforts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This paper is organized as follows. Section 2 reviews the LPM framework. Section 3 discusses the application of LPM-LM on Mandarin AID. Section 4 reports the experimental results on a Mainland China and Taiwan Mandarin corpus. Some conclusions are given in the last section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Based on the proposed LPM framework shown in Fig. 2 , an input training utterance is first tokenized into a sequence of smaller prosodic units (sub-syllable in this case) including voiced and unvoiced segments. For each token, a segment-level prosodic feature vector n x is extracted (coefficients of log-pitch and log-energy trajectories and the duration of the segment). Here, the coefficients of trajectories are computed using Legendre polynomial function from the raw log-pitch and log-energy contours. The speech prosody of an input utterance is thus represented by a sequence of segment-level prosodic feature vectors, i.e.,", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 51, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "\uf07b \uf07d , 1,..., n n N \uf03d \uf03d X x .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "To well explain the variant of the observed prosodic feature vector sequence X of the utterance, several latent factors are introduced including speaker s , tone", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "\uf07b \uf07d , 1,..., n t n N \uf03d \uf03d T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "(or major/minor stress in toneless language) and prosody state sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "\uf07b \uf07d , 1,..., n q n N \uf03d \uf03d Q", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "(phonetic context is ignored in this study). The probability of X is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\uf028 \uf029 \uf028 \uf029 \uf028 \uf029 , ,", "eq_num": "| , , , , s p" } ], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p s ps \uf03d \uf0e5 Q T X X TQ TQ", "eq_num": "(1)" } ], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "Assume that each observed n x is dependent only on local prosodic state n q and tone n t (and the speaker s ), the first term in the right hand side of Eq. (1) is approximated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "\uf028 \uf029 \uf028 \uf029 1 | , , | , , N n n n n p s p s t q \uf03d \uf03d \uf0d5 X T Q x (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "Assume that speaker, prosodic state and tone sequences are all independent variables and the probabilities of speaker s and tone sequence T are uniform distributions, the last term in the right hand side of Eq. (1) is approximated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\uf028 \uf029 \uf028 \uf029 \uf028 \uf029 1 1 2 , , | N n n n p s p q p q q \uf02d \uf03d \uf0b5 \uf0d5 T Q", "eq_num": "(3)" } ], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "Finally, the distribution of the surface prosodic feature vector n x is modeled by the following linearly additive [9] formulation: ", "cite_spans": [ { "start": 115, "end": 118, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "n n n n s t q \uf03d \uf02b \uf02b \uf02b x y \u03bc \u03bc \u03bc", "eq_num": "(4)" } ], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p s t q \uf03d \uf02b \uf02b x x \u03bc \u03bc \u03bc \u03a3 \uf0a5", "eq_num": "(5)" } ], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "By this way, the likelihood function of an utterance given an LPM \uf06c is expressed by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 1 1 1 2 | |, , | N N n n n n n n n L p s t q p q p q q \uf06c \uf02d \uf03d \uf03d \uf03d \uf0d7 \uf0d5 \uf0d5 X x", "eq_num": "(6)" } ], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "Moreover, the optimal prosody state sequence Q of an utterance could be automatically labeled using a Viterbi search algorithm (with or without tone tags given)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "which maximize the likelihood function \uf028 \uf029 | L \uf06c X , i.e., \uf028 \uf029 \uf028 \uf029 1 1 1 2 argmaxlog ( | , , ) | N N n n n n n n n p s t q p q p q q \uf02d \uf03d \uf03d \uf0ec \uf0fc \uf03d \uf0d7 \uf0ed \uf0fd \uf0ee \uf0fe \uf0d5 \uf0d5 Q Q x", "eq_num": "(7)" } ], "section": "Latent Prosody Model of Speech Prosody", "sec_num": "2." }, { "text": "Mandarin spoken in Taiwan exhibits several major prosody differences from the Mandarin spoken in Mainland China [10] . Especially, people from Taiwan usually speak slower with a lower voice, and they sound soft and gentle; while Mainlanders have more ups and downs in their intonation, and their voices are higher and faster. These characteristics are likely attributable, at least in part, to influence from the Southern Fujianese dialect widely spoken throughout Taiwan.", "cite_spans": [ { "start": 112, "end": 116, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "LPM-based Mandarin Accent Identification", "sec_num": "3." }, { "text": "Since there are prosodic differences between Mainlander's and Taiwanese Mandarin, a LPM-based accent identification approach is built to identify these two Mandarin accent types. In the following subsections, the tokenization front-end and the speaker normalization parts of the proposed LPM-based approach and its training procedure are described in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LPM-based Mandarin Accent Identification", "sec_num": "3." }, { "text": "The operation of the tokenization front-end is shown in Fig. 3 . ", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 62, "text": "Fig. 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Tokenization front-end", "sec_num": "3.1." }, { "text": "To estimate the parameters of the LPM, an unsupervised sequential optimization procedure based on the maximum likelihood criterion is adopted. The training procedure sequentially decodes latent prosody state sequences using Eq. 7and updates the affecting factors (i.e., tone and prosody state) to optimize the likelihood function in Eq. (6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LPM training algorithm", "sec_num": "3.2." }, { "text": "In more detail, the sequential optimization training procedure executes the following steps until a convergence has been reached. It is worth noting that each step updates a subset of LPM parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LPM training algorithm", "sec_num": "3.2." }, { "text": "Step 0: Initialization \u2027 Derive the initial prosody state transition probabilities using the statistics of labeled prosody states.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LPM training algorithm", "sec_num": "3.2." }, { "text": "Step 1: Re-Label \u2027 Re-label the prosody state sequence of all utterance using Eq. (7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LPM training algorithm", "sec_num": "3.2." }, { "text": "Step 2: Re-Estimate \u2027 Update the covariance matrix \u03a3 and the prosody state transition probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LPM training algorithm", "sec_num": "3.2." }, { "text": "Step 3: Iteration \u2027 Repeat step 1 to 2 until the likelihood function Eq. 
{ "text": "To evaluate the proposed LPM approach, two telephone speech corpora were mixed together: one is the Mandarin Across Taiwan (MAT) corpus [11] released by the Association for Computational Linguistics and Chinese Language Processing (ACLCLP), Taiwan, and the other is the 500-people Telephone Read Speech Corpus (TRSC) [12] released by the Chinese Corpus Consortium (CCC), China. There are about 4500 (MAT-2000+MAT-2500) Taiwanese and 500 Mainlander speakers in MAT and TRSC, respectively. The mixed corpus is randomly divided into a training, a development and a test set. The detailed speaker and utterance information is listed in Table 1 . The evaluation is executed utterance by utterance, and the average length of an utterance is about 5 seconds. First of all, the learning curves of the LPMs were examined. Fig. 4 shows the likelihood functions on the MAT and TRSC training sets, respectively, along with the number of training iterations. It can be seen from the figure that the LPMs converged quickly, especially for the TRSC set. After LPM training converged, the learned affecting patterns of the 5 tones of Taiwanese and Mainlanders' Mandarin, respectively, were drawn in Fig. 5 . It is found that the major tone differences between Taiwan and Mainland China lie in the patterns of tones 3 and 5. This is consistent with common linguistic knowledge [10] .", "cite_spans": [ { "start": 125, "end": 129, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 1046, "end": 1050, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 326, "end": 332, "text": "Table.", "ref_id": null }, { "start": 510, "end": 516, "text": "Fig. 4", "ref_id": "FIGREF7" }, { "start": 875, "end": 881, "text": "Fig. 5", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Corpus", "sec_num": "4.1." }, { "text": "These results suggest that LPMs could automatically learn the accent-specific characteristics of Taiwanese and Mainlanders' Mandarin. We therefore expect that the LPM-LM-based approach can be successfully used to discriminate these two Mandarin accents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": "4.1." }, { "text": "To set up a reference baseline, two popular phonotactic approaches and one acoustic approach were first tested. Table 2 shows the performances of the individual systems and their fusion results. The fusion was done using a softmax-output multi-layer perceptron (MLP) trained on the development sets. From Table 2 , it is found that (1) PPR-LM and UPR-LM worked better than SDC/GMM and (2) the best performance, a 20.68% error rate, was achieved by the fusion of the PPR-LM, UPR-LM and SDC/GMM systems.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 2", "ref_id": null }, { "start": 288, "end": 295, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Acoustic and Phonotactic baselines", "sec_num": "4.3." }, { "text": "The proposed LPM-LM approach was then evaluated. In the training phase, the correct tone tags were given, but in the testing phase MLP-based tone recognizers were adopted to provide estimated tone tags online [7] . Table 2 shows the performances of the proposed LPM-LM and of the fusion of LPM-LM with the acoustic and phonotactic baseline. The fusion was also done using the same softmax-output MLP trained on the development sets. Different from the acoustic features, the prosodic features capture another characteristic of speech (for example, tone). From Table 2 , it is found that LPM-LM worked comparably to SDC/GMM but worse than the acoustic and phonotactic baseline. This is because LPM-LM uses only prosodic features rather than strong acoustic features. However, the fusion of LPM-LM and the acoustic and phonotactic baseline could further reduce the error rate from 20.68% to 16.18%. This result may suggest the complementarity of those methods.", "cite_spans": [ { "start": 199, "end": 202, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 205, "end": 212, "text": "Table 2", "ref_id": null }, { "start": 534, "end": 541, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Prosodic approach", "sec_num": "4.4." },
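{ "text": "To make the LM back-end concrete, the following Python sketch scores a decoded prosody-state sequence against accent-specific state n-grams (bigrams here for brevity; the paper does not state the n-gram order used for the LPM-LM) and picks the accent with the higher average log-probability. In the reported system these scores were further fused with the SDC/GMM, PPR-LM and UPR-LM scores by a softmax-output MLP, which is not reproduced here; all function names are illustrative assumptions.

import math
from collections import defaultdict

def train_state_bigram(state_sequences, num_states, alpha=1.0):
    # Accent-specific bigram over decoded prosody-state sequences, with add-alpha smoothing.
    counts = defaultdict(lambda: defaultdict(float))
    for seq in state_sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a][b] += 1.0
    logprob = {}
    for a in range(num_states):
        total = sum(counts[a].values()) + alpha * num_states
        logprob[a] = {b: math.log((counts[a][b] + alpha) / total) for b in range(num_states)}
    return logprob

def accent_score(seq, logprob):
    # Average log-likelihood of one utterance's state sequence under an accent LM.
    pairs = list(zip(seq[:-1], seq[1:]))
    return sum(logprob[a][b] for a, b in pairs) / max(len(pairs), 1)

def identify_accent(seq, lm_taiwan, lm_mainland):
    return 'Taiwan' if accent_score(seq, lm_taiwan) > accent_score(seq, lm_mainland) else 'Mainland'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prosodic approach", "sec_num": "4.4." },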
{ "text": "In this paper, an LPM-LM-based approach is proposed to identify two Mandarin accent types spoken by native speakers in Mainland China and Taiwan. Experimental results on a mixed TRSC and MAT database showed that fusion of the proposed LPM-LM and an SDC/GMM+PPR-LM+UPR-LM baseline system could further reduce the average accent identification error rate from 20.7% to 16.2%. Therefore, the proposed LPM method is a promising approach. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Language Recognition Evaluation", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Language Recognition Evaluation, National Institute of Standards and Technology, http://www.itl.nist.gov/iad/mig/tests/lre/.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Automatic Prosodic Variations Modeling for Language and Dialect Discrimination", "authors": [ { "first": "Jean-Luc", "middle": [], "last": "Rouas", "suffix": "" } ], "year": 2007, "venue": "Audio, Speech, and Language Processing", "volume": "15", "issue": "", "pages": "1904--1911", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean-Luc Rouas, \"Automatic Prosodic Variations Modeling for Language and Dialect Discrimination,\" Audio, Speech, and Language Processing, IEEE Transactions on, vol. 15, pp. 1904-1911, Aug. 2007.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Chinese Dialect Identification Using Tone Features Based on Pitch Flux", "authors": [ { "first": "Bin", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Donglai", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Rong", "middle": [], "last": "Tong", "suffix": "" } ], "year": 2006, "venue": "Proceedings. 2006 IEEE International Conference on", "volume": "", "issue": "", "pages": "I--I", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bin Ma, Donglai Zhu, and Rong Tong, \"Chinese Dialect Identification Using Tone Features Based on Pitch Flux,\" in Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on, Toulouse, France, May 2006, pp. I-I.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Language Identification Using Pitch Contour Information in the Ergodic Markov Model", "authors": [ { "first": "Chi-Yueh", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Hsiao-Chuan", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2006, "venue": "Proceedings. 2006 IEEE International Conference on", "volume": "", "issue": "", "pages": "I--I", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi-Yueh Lin and Hsiao-Chuan Wang, \"Language Identification Using Pitch Contour Information in the Ergodic Markov Model,\" in Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on, Toulouse, France, May 2006, pp. 
I-I.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Language Identification Using Phonetic and Prosodic HMMs with Feature Normalization", "authors": [ { "first": "Y", "middle": [], "last": "Obuchi", "suffix": "" }, { "first": "N", "middle": [], "last": "Sato", "suffix": "" } ], "year": 2005, "venue": "Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "569--572", "other_ids": {}, "num": null, "urls": [], "raw_text": "Obuchi, Y. and Sato, N, \"Language Identification Using Phonetic and Prosodic HMMs with Feature Normalization,\" in Acoustics, Speech, and Signal Processing, 2005. Proceedings. (ICASSP '05). IEEE International Conference on, Philadelphia, Mar. 2005, pp. 569-572.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Modeling Prosodic Features With Joint Factor Analysis for Speaker Verification", "authors": [ { "first": "Najim", "middle": [], "last": "Dehak", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Dumouchel", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Kenny", "suffix": "" } ], "year": 2007, "venue": "Audio, Speech, and Language Processing", "volume": "15", "issue": "", "pages": "2095--2103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Najim Dehak, Pierre Dumouchel, and Patrick Kenny, \"Modeling Prosodic Features With Joint Factor Analysis for Speaker Verification,\" Audio, Speech, and Language Processing, IEEE Transactions on, vol. 15, no. 17, pp. 2095-2103, Sept. 2007.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Latent Prosody Model of Continuous Mandarin Speech", "authors": [ { "first": "Chen-Yu", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Xiao-Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yuan-Fu", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Yih-Ru", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sin-Horng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Keikichi", "middle": [], "last": "Hirose", "suffix": "" } ], "year": 2007, "venue": "Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen-Yu Chiang, Xiao-Dong Wang, Yuan-Fu Liao, Yih-Ru Wang, Sin-Horng Chen, and Keikichi Hirose , \"Latent Prosody Model of Continuous Mandarin Speech,\" in Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on, Hawaii, Apr. 2007, pp. IV-625-IV-628.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Fluent speech prosody: Framework and modeling", "authors": [ { "first": "Chiu-Yu", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "Yehlin", "middle": [], "last": "Shao-Huang Pin", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Hsin-Min", "suffix": "" }, { "first": "Yong-Cheng", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2005, "venue": "Speech Comminication", "volume": "46", "issue": "", "pages": "284--309", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chiu-yu Tseng, Shao-huang Pin, Yehlin Lee, Hsin-min Wang, and Yong-cheng Chen, \"Fluent speech prosody: Framework and modeling,\" Speech Comminication, vol. 46:3-4, pp. 284-309, Mar. 
2005.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A statistics-based pitch contour model for Mandarin speech", "authors": [ { "first": "Wen-Hsing", "middle": [], "last": "Sin-Horng Chen", "suffix": "" }, { "first": "Yih-Ru", "middle": [], "last": "Lai", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2005, "venue": "Journal of the Acoustical Society of America", "volume": "117", "issue": "2", "pages": "908--925", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sin-Horng Chen, Wen-Hsing Lai, and Yih-Ru Wang, \" A statistics-based pitch contour model for Mandarin speech,\" Journal of the Acoustical Society of America, 117 (2), pp. 908-925, Feb. 2005.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Prosodic Properties of Intonation in Two Major Varieties of Mandarin Chinese: Mainland China vs. Taiwan", "authors": [ { "first": "Chin-Chin", "middle": [], "last": "Tseng", "suffix": "" } ], "year": 2004, "venue": "International Symposium on Tonal Aspects of Languages: With Emphasis on Tone Languages", "volume": "", "issue": "", "pages": "28--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Chin Tseng, \"Prosodic Properties of Intonation in Two Major Varieties of Mandarin Chinese: Mainland China vs. Taiwan,\" in International Symposium on Tonal Aspects of Languages: With Emphasis on Tone Languages, Beijing, China, Mar. 2004, pp. 28-31.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "MAT-2000 -Design, Collection, and Validation of a Mandarin 2000-Speaker Telephone Speech Database", "authors": [ { "first": "", "middle": [], "last": "Hsiao-Chuan", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chiu-Yu", "middle": [], "last": "Seide", "suffix": "" }, { "first": "Lin-Shan", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "460--463", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsiao-Chuan Wang, Frank Seide, Chiu-Yu Tseng, Lin-Shan Lee, \"MAT-2000 - Design, Collection, and Validation of a Mandarin 2000-Speaker Telephone Speech Database\", in ICSLP 2000, Beijing, China, Oct. 2000, pp. 460-463.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "500-People TRSC (Telephone Read Speech, Corpus), Chinese Corpus Consortium", "authors": [], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "500-People TRSC (Telephone Read Speech, Corpus), Chinese Corpus Consortium, China, http://www.d-ear.com/CCC/corpora/2003-TRSC.pdf, 2003.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "The block diagram of the proposed LPM-LM-based Mandarin accent identification system." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "The block diagram of the proposed LPM framework (speaker factor is omitted to simply this figure)." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "It firstly extracts the raw prosodic contours (log-pitch and log-energy) of an input utterance. The pitch and energy contours are then segmented by an ASR engine. The output is a sequence of voiced and unvoiced segments." 
}, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "A typical segmentation results of the tokenization front-end (from top to bottom panel: spectrum, syllable and sub-syllable segmentations, log-pitch and log-energy contours). For each voiced segment, six dimensional prosodic features are extracted including coefficients of 3-order Legendre polynomial function for approximating the log-pitch contour, the log-energy mean and duration of the segment. On the other hand, for each unvoiced segment, only its log-energy mean and duration are utilized." }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "Derive the initial affecting factors s \u03bc and n t \u03bc of tones by averaging all prosodic feature vector nx of a speaker or the whole training data, respectively.\u2027 Cluster and label the prosody state of each segment by vector quantization (VQ) using the residue prosodic feature vector Derive the initial covariance matrix \u03a3 ." }, "FIGREF5": { "type_str": "figure", "uris": null, "num": null, "text": "Update the affecting factors s \u03bc of speakers," }, "FIGREF6": { "type_str": "figure", "uris": null, "num": null, "text": "released by Association for Computational Linguistics and Chinese Language Processing (ACLCLP), Taiwan, and the other is 500-people telephone reading speech corpus (TRSC) [12] released by Chinese Corpus Consortium (CCC), China. There are about 4500 (MAT-2000+MAT-2500) Taiwanese and 500" }, "FIGREF7": { "type_str": "figure", "uris": null, "num": null, "text": "The learning curves of the LPMs training on MAT and TRSC training sets (left: MAT, right: TRSC), respectively." }, "FIGREF8": { "type_str": "figure", "uris": null, "num": null, "text": "The learned tone affecting patterns on MAT and TRSC corpora (top 5 panels: MAT, bottom 5 panels: TRSC), respectively." }, "FIGREF9": { "type_str": "figure", "uris": null, "num": null, "text": "first tested including (1) PPR-LM, (2) universal phone recognizer (UPR)-LM and (3) shifted delta cepstral (SDC)/Gaussian mixture model (GMM). For PPR-LM and UPR-LM, 39-dimensional mel-frequency cesptrum coefficient (MFCC) feature vectors were utilized to train the front-end phone recognizers. There are in total 50 phonemes in Mandarin for PPR-LM. But for UPR-LM, the number of phonemes is extended to 63 to reflect the major pronunciation differences (retroflex and nasal-endings sounds) between Mainlander's and Taiwanese Mandarin. All MFCCs were pre-processed by cepstral normalization (CN) to partially compensate the channel and database mismatch. Beside, tri-gram LM backbends were adopted for both PPR-LM and UPR-LM. Moreover, the parameters of SDC were empirically set to 7-3-3-7 and the number of mixtures in GMMs was 512. Experimental results of the individual acoustic, phonotactic and prosodic approaches and their fusion on a mixed TRSC and" }, "FIGREF10": { "type_str": "figure", "uris": null, "num": null, "text": "This work was supported by the National Science Council, Taiwan, under the project with contract NSC 96-2221-E-027-100-MY2 and is a partial result of Project 8353C41220 conducted by ITRI under sponsorship of the Ministry of Economic Affairs, Taiwan, R.O.C." }, "TABREF1": { "html": null, "type_str": "table", "text": "Detail information of the MAT ad TRSC corpora including number of speakers and utterances.", "content": "
Training | Development | Test
spk | utt | spk | utt | spk | utt
MAT3936676333742201922382009
TRSC4094334012012594202042
4.2. LPM training results
For all following LPM experiments, the number of prosody states was empirically set to 11 (8 for voiced, 3 for unvoiced states) and there are 5 different tones in Mandarin.
", "num": null } } } }