{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:09:35.968233Z"
},
"title": "Robust Neural Machine Translation with ASR Errors",
"authors": [
{
"first": "Haiyang",
"middle": [],
"last": "Xue",
"suffix": "",
"affiliation": {
"laboratory": "Key Laboratory of Intelligent Information Processing",
"institution": "Sogou Inc",
"location": {
"settlement": "Beijing"
}
},
"email": "xuehaiyang@ict.ac.cn"
},
{
"first": "Yang",
"middle": [],
"last": "Feng",
"suffix": "",
"affiliation": {
"laboratory": "Key Laboratory of Intelligent Information Processing",
"institution": "Sogou Inc",
"location": {
"settlement": "Beijing"
}
},
"email": "fengyang@ict.ac.cn"
},
{
"first": "Shuhao",
"middle": [],
"last": "Gu",
"suffix": "",
"affiliation": {
"laboratory": "Key Laboratory of Intelligent Information Processing",
"institution": "Sogou Inc",
"location": {
"settlement": "Beijing"
}
},
"email": "gushuhao19b@ict.ac.cn"
},
{
"first": "Wei",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "Key Laboratory of Intelligent Information Processing",
"institution": "Sogou Inc",
"location": {
"settlement": "Beijing"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In many practical applications, neural machine translation systems have to deal with the input from automatic speech recognition (ASR) systems which may contain a certain number of errors. This leads to two problems which degrade translation performance. One is the discrepancy between the training and testing data and the other is the translation error caused by the input errors may ruin the whole translation. In this paper, we propose a method to handle the two problems so as to generate robust translation to ASR errors. First, we simulate ASR errors in the training data so that the data distribution in the training and test is consistent. Second, we focus on ASR errors on homophone words and words with similar pronunciation and make use of their pronunciation information to help the translation model to recover from the input errors. Experiments on two Chinese-English data sets show that our method is more robust to input errors and can outperform the strong Transformer baseline significantly.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In many practical applications, neural machine translation systems have to deal with the input from automatic speech recognition (ASR) systems which may contain a certain number of errors. This leads to two problems which degrade translation performance. One is the discrepancy between the training and testing data and the other is the translation error caused by the input errors may ruin the whole translation. In this paper, we propose a method to handle the two problems so as to generate robust translation to ASR errors. First, we simulate ASR errors in the training data so that the data distribution in the training and test is consistent. Second, we focus on ASR errors on homophone words and words with similar pronunciation and make use of their pronunciation information to help the translation model to recover from the input errors. Experiments on two Chinese-English data sets show that our method is more robust to input errors and can outperform the strong Transformer baseline significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, neural machine translation (NMT) has achieved impressive progress and has shown superiority over statistical machine translation (SMT) systems on multiple language pairs . NMT models are usually built under the encoder-decoder architecture where the encoder produces a representation for the source sentence and the decoder generates target translation from this representation word by word Sutskever et al., 2014; Gehring et al., 2017; Vaswani et al., 2017) . Now NMT systems are widely used in real world and in many cases they receive as input the result of the automatic speech recognition (ASR) system.",
"cite_spans": [
{
"start": 408,
"end": 431,
"text": "Sutskever et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 432,
"end": 453,
"text": "Gehring et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 454,
"end": 475,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the great success, NMT is subject to orthographic and morphological errors which can be comprehended by human (Belinkov and Bisk, 2017) . Due to the auto-regression of decoding process, translation errors will be accumulated along with the generated sequence. Once a translation error occurs at the beginning, it will lead to a totally different translation. Although ASR technique is mature enough for commercial applications, there are still recognition errors in their output. These errors from ASR systems will bring about translation errors even totally meaning drift. As the increasing of ASR errors, the translation performance will decline gradually (Le et al., 2017) . Moreover, the training data used for NMT training is mainly human-edited sentence pairs in high quality and thus ASR errors in the input are always unseen in the training data. This discrepancy between training and test data will further degrade the translation performance. In this paper, we propose a robust method to address the above two problems introduced by ASR input. Our method not only tries to keep the consistency of the training and test data but to correct the input errors introduced by ASR systems.",
"cite_spans": [
{
"start": 118,
"end": 143,
"text": "(Belinkov and Bisk, 2017)",
"ref_id": "BIBREF1"
},
{
"start": 666,
"end": 683,
"text": "(Le et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on the most widely existent substitution errors in ASR results which can be further distinguished into wrong substitution between words with similar pronunciation and wrong substitution between the words with the same pronunciation (known as homophone words). Table 1 shows Chinese-to-English translation examples of these two kinds of errors. Although only one input word changes in the given three source sentences, their translations are quite different. To keep the consistency between training and testing, we simulate these two types of errors and inject them into the training data randomly. To recover from ASR errors, we integrate the pronunciation information into the translation model to recover the two kinds of errors. For words with similar pronunciation(we name it as Sim-Pron-Words ), we first predict the",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 276,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u8fd9 \u4efd \u793c \u7269 \u9971 \u542b \u4e00 \u4efd \u6df1 \u6df1 \u6df1 \u60c5 \u60c5 \u60c5. zh\u00e8 f\u00e8n l\u01d0 w\u00f9 b\u01ceo h\u00e1n y\u012b f\u00e8n sh\u0113n q\u00edng.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold input",
"sec_num": null
},
{
"text": "\u8fd9 \u4efd \u793c \u7269 \u9971 \u542b \u4e00 \u4efd \u7533 \u7533 \u7533 \u8bf7 \u8bf7 \u8bf7. zh\u00e8 f\u00e8n l\u01d0 w\u00f9 b\u01ceo h\u00e1n y\u012b f\u00e8n sh\u0113n q\u01d0ng.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ASR-HM",
"sec_num": null
},
{
"text": "\u8fd9 \u4efd \u793c \u7269 \u9971 \u542b \u4e00 \u4efd \u5fc3 \u5fc3 \u5fc3 \u60c5 \u60c5 \u60c5. zh\u00e8 f\u00e8n l\u01d0 w\u00f9 b\u01ceo h\u00e1n y\u012b f\u00e8n x\u012bn q\u00edng. Reference This gift is full of affection. Trans-HM This gift contains an application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ASR-SP",
"sec_num": null
},
{
"text": "This gift is full of mood. Table 1 : A Chinese-English translation example with ASR errors. \"ASR-HM\" gives an input sentence with ASR errors on homophone words and \"Trans-HM\" shows its translation. \"ASR-SP\" gives an input sentence with ASR errors on words with similar pronunciation and \"Trans-SP\" denotes its translation.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 34,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Trans-SP",
"sec_num": null
},
{
"text": "true pronunciation and then integrate the predicted pronunciation into the translation model. For homophone words, although the input characters are wrong, the pronunciation is correct and can be used to assistant translation. In this way, we get a twostepped method for ASR inputted translation. The first step is to get a training data close to the practical input, so that they can have similar distribution. The second step is to smooth ASR errors according to the pronunciation. We conducted experiments on two Chinese-to-English data sets and added noise to the test data sets at different rates. The results show that our method can achieve significant improvements over the strong Transformer baseline and is more robust to input errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trans-SP",
"sec_num": null
},
{
"text": "As our method is based on the self-attention based neural machine translation model (Transformer) (Vaswani et al., 2017) , we will first introduce Transformer briefly before introducing our method.",
"cite_spans": [
{
"start": 98,
"end": 120,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Encoder The encoder consists of 6 identical layers. Each layer consists of two sub-layers: selfattention followed by a position-wise fully connected feed-forward layer. It uses residual connections around each of the sub-layers, followed by layer normalization. The output of each sublayer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function carried out by the sublayer itself. The input sequence x is fed into these two sub-layers, then we can get the hidden state sequence of the encoder:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder and Decoder",
"sec_num": "2.1"
},
{
"text": "h = (h 1 , h 2 , . . . , h j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder and Decoder",
"sec_num": "2.1"
},
{
"text": "where j denotes the length of the input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder and Decoder",
"sec_num": "2.1"
},
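As a concrete illustration of the sub-layer pattern above, here is a minimal PyTorch sketch of LayerNorm(x + Sublayer(x)); the class and argument names are our own and do not come from the paper's code.

```python
# A minimal sketch of the Transformer residual sub-layer pattern,
# LayerNorm(x + Sublayer(x)). Names are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualSublayer(nn.Module):
    def __init__(self, d_model: int, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer          # e.g. self-attention or feed-forward
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output of each sub-layer: LayerNorm(x + Sublayer(x))
        return self.norm(x + self.sublayer(x))
```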
{
"text": "Decoder The decoder shares a similar structure with the encoder, which also consists of 6 layers. Each layer has three sub-layers: self-attention, encoder-decoder attention and a position-wise feedforward layer. It also employs a residual connection and layer normalization at each sub-layer. The decoder uses masking in its self-attention to prevent a given output position from incorporating information about future output positions during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder and Decoder",
"sec_num": "2.1"
},
{
"text": "The attention mechanism in Transformer is the socalled scaled dot product attention which uses the dot-product of the query and keys to present the relevance of the attention distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a = sof tmax( QK T \u221a d k )",
"eq_num": "(1)"
}
],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "where the d k is the dimensions of the keys. Then the weighted values are summed together to get the final results:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t = (a V)",
"eq_num": "(2)"
}
],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "Instead of performing a single attention function with a single version of queries, keys and values, multi-head attention mechanism get h different versions of queries, keys and values with different projection functions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "Q i , K i , V i = QW Q i , KW K i , VW V i , i \u2208 [1, h] (3) where Q i , K i , V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "i are the query , key and value representations of the i-th head respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "W Q i , W K i , W V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "i are the transformation matrices. h is the number of attention heads. h attention Figure 1 : The illustration of our method. \"HM\" stands for substitution errors between homophone words and \"SP\" stands for substitution errors between the words with similar pronunciation. The elements in blue boxes are a case of SP errors. Those in the red boxes represent the corrected version with the help of pronunciation information. functions are applied in parallel to produce the output states u i . Finally, the outputs are concatenated to produce the final attention:",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 91,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t = Concat(t 1 , ..., t h )",
"eq_num": "(4)"
}
],
"section": "Attention",
"sec_num": "2.2"
},
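The following is a minimal PyTorch sketch of Equations (1)-(4): scaled dot-product attention followed by multi-head concatenation. All class and variable names are our own illustration, and the fused per-head projections are an implementation convenience, not a claim about the paper's code.

```python
# Sketch of Eqs. (1)-(4): scaled dot-product attention and multi-head
# concatenation. Assumes d_model % h == 0; names are illustrative.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.size(-1)
    a = F.softmax(Q @ K.transpose(-2, -1) / math.sqrt(d_k), dim=-1)  # Eq. (1)
    return a @ V                                                     # Eq. (2)

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, h: int):
        super().__init__()
        self.h, self.d_k = h, d_model // h
        self.W_q = nn.Linear(d_model, d_model)  # h projections, fused: Eq. (3)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)

    def forward(self, Q, K, V):
        B, L, _ = Q.shape
        split = lambda x: x.view(B, -1, self.h, self.d_k).transpose(1, 2)
        t_i = scaled_dot_product_attention(split(self.W_q(Q)),
                                           split(self.W_k(K)),
                                           split(self.W_v(V)))
        # Concatenate the h head outputs back into one vector: Eq. (4)
        t = t_i.transpose(1, 2).contiguous().view(B, L, -1)
        return self.W_o(t)
```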
{
"text": "3 The Proposed Method",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "Although ASR is mature for commercial applications, there are still recognition errors in the result of ASR. The ASR recognition errors can be classified into three categories: substitution, deletion and insertion, which are shown in Table 2 . We counted the word error rate (WER) for the three types of errors respectively on our in-house data set, which consists of 100 hours of Chinese speech across multiple domains. The results in Table 2 gives the ratio of the wrong words against the total words. We can see that the substitution errors are the main errors which is consistent with the results in Mirzaei et al. 2016. Other researchers have proven that over 50% of the machine translation errors are associated with substitution errors which have a greater impact on translation quality than deletion or insertion errors (Vilar et al., 2006; Ruiz and Federico, 2014) . Substitution errors can be further divided into two categories: substitution between the words with similar pronunciation (denoted as SP errors) and substitution between homophone words (denoted as HM errors). Based on these conclusions, we focus on these two kinds of substitution errors in this paper. In what follows we will take Chinese as an example to introduce our method and our method can also be applied to many other languages in a similar way. Our method aims to improve the robustness of NMT to ASR errors. To this end, our method first constructs a training data set which has a similar data distribution with the test data, then makes use of pronunciation information to recover from the SP errors and HM errors. Specifically, our method works in a flow of three steps as 1. adding SP errors and HM errors in the training data randomly to simulate ASR errors occurring in test;",
"cite_spans": [
{
"start": 828,
"end": 848,
"text": "(Vilar et al., 2006;",
"ref_id": "BIBREF23"
},
{
"start": 849,
"end": 873,
"text": "Ruiz and Federico, 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 436,
"end": 443,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "2. predicting the true pronunciation for SP errors and amending the pronunciation to the predicted results;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "3. integrating pronunciation information into the word semantic to assistant the translation of HM errors as homophone words always have the pronunciation. Figure 1 illustrate the architecture of our method. Note that the above three steps must be cascaded which means we always first try to correct the pronunciation information for SP errors and then use the corrected pronunciation information to play a part in the translation for HM errors. We will introduce the three steps in details in the following sections.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 164,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attention",
"sec_num": "2.2"
},
{
"text": "We process source words one by one by first deciding whether to change it to ASR noise at a certain probability p \u2208 [0, 1], and if yes, then selecting one word to substitute the source word according to the word frequency of the training data. Given a source word x, we first collect its SP word set V sp (x) and HM word set V hm (x), then sample from a Bernoulli distribution with a probability p to substitute it with a noise:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulating ASR errors in Training",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r x \u223c Bernoulli(p)",
"eq_num": "(5)"
}
],
"section": "Simulating ASR errors in Training",
"sec_num": "3.1"
},
{
"text": "where r x \u2208 {0, 1} is the output of the Bernoulli distribution and p \u2208 [0, 1] is the probability that the Bernoulli distribution outputs 1. When r x is 1, we go to the next step to substitute x. Next, we can select a word to substitute x from a word set V(x) at a probability as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulating ASR errors in Training",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(x) = Count(x) x \u2208V(x)\\{x} Count(x )",
"eq_num": "(6)"
}
],
"section": "Simulating ASR errors in Training",
"sec_num": "3.1"
},
{
"text": "where Count(x) stands for the count that the word x occurs in the training data, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulating ASR errors in Training",
"sec_num": "3.1"
},
{
"text": "V(x) can be V sp (x), V hm (x) or V sp (x) \u222a V hm (x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulating ASR errors in Training",
"sec_num": "3.1"
},
{
"text": "depending on whether we want to simulate SP errors, HM errors or mixture. To get the training data with the data distribution consistent with the ASR input, we sample words from",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulating ASR errors in Training",
"sec_num": "3.1"
},
{
"text": "V sp (x) \u222a V hm (x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulating ASR errors in Training",
"sec_num": "3.1"
},
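As an illustration of the noise-injection procedure in Equations (5)-(6), here is a small Python sketch; the data structures (confusion_sets, counts) are hypothetical stand-ins for the SP/HM word sets and the training-data frequencies.

```python
# Sketch of Eqs. (5)-(6): each source word is replaced with probability p
# by a confusable word sampled in proportion to its training-data frequency.
# All names are our own assumptions, not the authors' code.
import random

def inject_asr_noise(sentence, confusion_sets, counts, p=0.2):
    """sentence: list of words; confusion_sets[x]: V_sp(x), V_hm(x) or their
    union; counts[x]: frequency of word x in the training data."""
    noisy = []
    for x in sentence:
        candidates = [c for c in confusion_sets.get(x, set()) if c != x]
        # r_x ~ Bernoulli(p): decide whether to corrupt this word (Eq. 5)
        if candidates and random.random() < p:
            # frequency-weighted choice over V(x) \ {x} (Eq. 6)
            weights = [counts.get(c, 1) for c in candidates]
            noisy.append(random.choices(candidates, weights=weights)[0])
        else:
            noisy.append(x)
    return noisy
```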
{
"text": "In Chinese, the Pinyin word is used to represent the pronunciation of the word and a Pinyin word usually consists of several Pinyin letters. For example, in Table 2 , the Pinyin word for the word \"\u8bed\" is \"y\u01d4\" and it has two Pinyin letters as \"y\" and \"\u01d4\". According to the pronunciation, one Pinyin word can be divided into two parts: the initial, which usually only contains the first Pinyin letter, and the final, which usually contains the rest Pinyin letters. We looked into our in-house ASR results and found that most SP errors are caused by the wrong initial. Besides, Chinese Pinyin has fixed combinations of the initial and the final, and hence given a final, we can get all possible initials that can occur together with the final in one Pinyin word. In this sense, for an SP error, we can draw the distribution over all the possible initials to predict the correct Pinyin word. With the distribution, we can amend the embedding of the Pinyin word to the correct one. Formally, given a source sentence x = (x 1 , . . . , x J ), we use u = (u 1 , . . . , u J ) to denote its Pinyin word sequence and use u jk to denote the k-th Pinyin letter in the Pinyin word u j . For a Pinyin word u j , we represent its initial as",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 164,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u ini j = u j1",
"eq_num": "(7)"
}
],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "and represent its final as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u fin j = [u j2 , . . . , u j K j ]",
"eq_num": "(8)"
}
],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "where K j is the number of Pinyin letters of u j . We also maintain an embedding matrix for the Pinyin words and the Pinyin letters, respectively. Then we can get the embedding for the final u fin j by adding all the embedding of its Pinyin letters as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E[u fin j ] = K j k=2 (E[u jk ])",
"eq_num": "(9)"
}
],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "where E[\u2022] means the corresponding embedding of the input. As SP errors usually result from wrong initials, we predict the probability of the true initial according to the co-occurrence with the immediately previous Pinyin word u j\u22121 and the right after Pinyin word u j+1 . Then we can draw the distribution over all the possible initials for u j as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "p ini \u223c softmax(g ini (E[u j\u22121 ] + E[u fin j ] + E[u j+1 ])) (10) where g ini (\u2022)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "is a linear transformation function. Then we use the weighted sum of the embedding of all the possible initials as the true embedding of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u ini j as E[u ini j ] = l\u2208V ini (u j ) p ini (l) * E[l]",
"eq_num": "(11)"
}
],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "where V ini (u j ) denotes the letter set which can be used as the initial of u j and p ini (l) denotes the predicted probability for the Pinyin letter l in Equation 10. Then we can update the embedding of u j based on the amended Pinyin letter embedding as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E[u j ] = g(E[u j ], E[u ini j ], E[u fin j ])",
"eq_num": "(12)"
}
],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
{
"text": "where g(.) is a linear transformation function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amending Pronunciation for SP Errors",
"sec_num": "3.2"
},
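A minimal PyTorch sketch of Equations (7)-(12) follows; the module layout, dimensions, and the candidate-initial mask are our own assumptions about how the amendment could be wired up, not the authors' implementation.

```python
# Sketch of Eqs. (7)-(12): predict a distribution over candidate initials
# from the neighbouring Pinyin words and the final, then fuse the amended
# initial/final embeddings back into the Pinyin-word embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PinyinAmender(nn.Module):
    def __init__(self, n_pinyin_words, n_letters, d):
        super().__init__()
        self.word_emb = nn.Embedding(n_pinyin_words, d)            # E[u_j]
        # padding_idx=0 so padded final positions contribute zero vectors
        self.letter_emb = nn.Embedding(n_letters, d, padding_idx=0)  # E[u_jk]
        self.g_ini = nn.Linear(d, n_letters)                        # Eq. (10)
        self.g = nn.Linear(3 * d, d)                                # Eq. (12)

    def forward(self, u_prev, u_j, u_next, final_letters, cand_mask):
        """u_prev/u_j/u_next: (B,) Pinyin-word ids; final_letters: (B, L)
        padded letter ids of the final; cand_mask: (B, n_letters) bool,
        True for letters in V^ini(u_j)."""
        # Eq. (9): final embedding = sum of its letter embeddings
        e_fin = self.letter_emb(final_letters).sum(dim=1)
        # Eq. (10): distribution over the possible initials from context
        logits = self.g_ini(self.word_emb(u_prev) + e_fin + self.word_emb(u_next))
        logits = logits.masked_fill(~cand_mask, float("-inf"))
        p_ini = F.softmax(logits, dim=-1)
        # Eq. (11): amended initial = expected letter embedding
        e_ini = p_ini @ self.letter_emb.weight
        # Eq. (12): fuse word, amended initial and final embeddings
        return self.g(torch.cat([self.word_emb(u_j), e_ini, e_fin], dim=-1))
```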
{
"text": "For HM errors, although the source word is not correct, the Pinyin word is still correct. Therefore, the Pinyin word can be used to provide additional true information about the source word. Specifically, we integrate the embedding of Pinyin words into the final output of the encoder, denoted as h = (h 1 , . . . , h J ), to get an advanced encoding for each source word. This is implemented via a gating mechanism and we calculate the gate \u03bb j for the j-th source word as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amending Encoding for HM Errors",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb j = W \u03bb tanh (W h h j + W u E[u j ])",
"eq_num": "(13)"
}
],
"section": "Amending Encoding for HM Errors",
"sec_num": "3.3"
},
{
"text": "where W \u03bb , W h and W u are weight matrices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amending Encoding for HM Errors",
"sec_num": "3.3"
},
{
"text": "With the gate, we update the hidden state h j to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amending Encoding for HM Errors",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h j = \u03bb j * h j + (1 \u2212 \u03bb j ) * E[u j ]",
"eq_num": "(14)"
}
],
"section": "Amending Encoding for HM Errors",
"sec_num": "3.3"
},
{
"text": "Then the updated hidden states of source words are fed to the decoder for the calculation of attention and generation of target words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Amending Encoding for HM Errors",
"sec_num": "3.3"
},
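The gating fusion of Equations (13)-(14) can be sketched in a few lines of PyTorch; names are ours, and we follow the paper's linear-tanh gate verbatim (a sigmoid squashing would be a common variant).

```python
# Sketch of Eqs. (13)-(14): gate each encoder state h_j with the (amended)
# Pinyin embedding E[u_j]. Module and variable names are illustrative.
import torch
import torch.nn as nn

class PinyinGate(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.W_h = nn.Linear(d, d, bias=False)
        self.W_u = nn.Linear(d, d, bias=False)
        self.W_lambda = nn.Linear(d, 1, bias=False)

    def forward(self, h, e_u):
        # Eq. (13): lambda_j = W_lambda tanh(W_h h_j + W_u E[u_j])
        lam = self.W_lambda(torch.tanh(self.W_h(h) + self.W_u(e_u)))
        # Eq. (14): interpolate encoder state and Pinyin embedding
        return lam * h + (1.0 - lam) * e_u
```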
{
"text": "We evaluated our method on two Chinese-English data sets which are from the NIST translation task and WMT17 translation task, respectively. For the NIST translation task, the training data consists of about 1.25M sentence pairs from LDC corpora with 27.9M Chinese words and 34.5M English words respectively 1 . We used NIST 02 data set as the development set and NIST 03, 04, 05, 06, 08 sets as the clean test sets which don't have ASR errors in the source side. For the WMT17 translation task, the training data consists of 9.3M bilingual sentence pairs obtained by combing the CWMT corpora and News Commentary v12. We use the newsdev2017 and newstest2017 as our development set and clean test set, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
{
"text": "For both of these two corpus, we tokenized and truecased the English sentences using the Moses scripts 2 . Then 30K merging operations were performed to learn byte-pair encoding(BPE) (Sennrich et al., 2015) . As for the Chinese data, we split the sentence into Chinese chars. We use the Chinese-Tone 3 tool to convert Chinese characters into their Pinyin counterpart without tones.",
"cite_spans": [
{
"start": 183,
"end": 206,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
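The character-to-Pinyin preprocessing can be sketched as follows; note the paper uses the ChineseTone tool, while this sketch substitutes the widely used pypinyin package (our assumption, not the paper's pipeline), which also produces tone-free Pinyin by default.

```python
# Sketch of the character-to-Pinyin preprocessing step, using pypinyin as a
# stand-in for the ChineseTone tool named in the paper.
from pypinyin import lazy_pinyin

chars = list("这份礼物饱含一份深情")
pinyin = lazy_pinyin(chars)   # tone-free Pinyin, one item per character
print(list(zip(chars, pinyin)))
```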
{
"text": "Then we apply the method mentioned in the section 3.1 to add SP errors, HM errors or both to the clean training set to get three kinds of noisy data. We have also set the substituting probability p to 0.1, 0.2 and 0.3 to investigate the impacts of the ASR errors in the training set. Considering that there is no public test sets simulating the substitution errors of ASR, we also crafted another three noisy test sets based on the clean sets with different amount of HM errors and SP errors in each source side sentence to test the robustness of the NMT model. We try our best to make these noisy test sets be close to the results of ASR, so that it can check the ability of our proposed method in the realistic speech translation scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
{
"text": "We evaluate the proposed method on the Transformer model and implement on the top of an opensource toolkit Fairseq-py (Edunov et al., 2017) . We follow (Vaswani et al., 2017) to set the configurations and have reproduced their reported results on the Base model. All the models were trained on a single server with eight NVIDIA TITAN Xp GPUs where each was allocated with a batch size of 4096 tokens. Sentences longer than 100 tokens were removed from the training data. For the base model, we trained it for a total of 100k steps and save a checkpoint at every 1k step intervals. The single model obtained by averaging the last 5 checkpoints were used for measuring the results.",
"cite_spans": [
{
"start": 118,
"end": 139,
"text": "(Edunov et al., 2017)",
"ref_id": null
},
{
"start": 152,
"end": 174,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
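Checkpoint averaging as described above can be sketched in plain PyTorch; the paths and the "model" key layout are assumptions about a Fairseq-style checkpoint (Fairseq also ships a scripts/average_checkpoints.py utility for this purpose).

```python
# Sketch of averaging the last 5 checkpoints into a single model, mirroring
# the evaluation setup above. Paths and checkpoint layout are hypothetical.
import torch

paths = [f"checkpoints/checkpoint{i}.pt" for i in range(96, 101)]  # last 5
states = [torch.load(p, map_location="cpu")["model"] for p in paths]

# Parameter-wise arithmetic mean over the saved state dicts
avg = {k: sum(s[k].float() for s in states) / len(states) for k in states[0]}

ckpt = torch.load(paths[-1], map_location="cpu")
ckpt["model"] = avg
torch.save(ckpt, "checkpoints/checkpoint_avg5.pt")
```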
{
"text": "During decoding, we set beam size to 5, and length penalty \u03b1=0.6 (Wu et al., 2016) . Other training parameters are the same as the default configuration of the Transformer model. We report casesensitive NIST BLEU (Papineni et al., 2002) scores for all the systems. For evaluation, we first merge output tokens back to their untokenized representation using detokenizer.pl and then use multi-bleu.pl to compute the scores as per reference.",
"cite_spans": [
{
"start": 65,
"end": 82,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 213,
"end": 236,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
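The scoring pipeline described above can be sketched with subprocess calls; the script locations and file names here are hypothetical.

```python
# Sketch of the evaluation pipeline: detokenize the system output with the
# Moses detokenizer, then score with multi-bleu.pl against the reference.
import subprocess

with open("hyp.tok") as fin, open("hyp.detok", "w") as fout:
    subprocess.run(["perl", "detokenizer.pl", "-l", "en"],
                   stdin=fin, stdout=fout, check=True)

with open("hyp.detok") as fin:
    subprocess.run(["perl", "multi-bleu.pl", "ref.txt"],
                   stdin=fin, check=True)  # prints the BLEU score
```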
{
"text": "The main results are shown in the Table 4 : Results of the ablation study on the NIST data. \"+SP Amendment\", \"+HM Amendmen\" and \"+Both Amendment\" represents the model only with the amending pronunciation for SP errors, amending errors for HM errors and with amending pronunciation for both of these two kinds of errors, respectively. Table 5 : Comparison of \"+SP Amendment\", \"+HM Amendmen\" and \"+Both Amendment\" on the WMT17 ZH\u2192EN dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 4",
"ref_id": null
},
{
"start": 334,
"end": 341,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "model significantly outperforms the baseline model on the noisy test sets on both of the NIST and WMT17 translation tasks. Furthernmore, we got the following conclusions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "First, the baseline model performs well on the clean test set, but it suffers a great performance drop on the noisy test sets, which indicates that the conventional NMT is indeed fragile to permuted inputs, which is consistent with prior work (Belinkov and Bisk, 2017; Cheng et al., 2018) .",
"cite_spans": [
{
"start": 243,
"end": 268,
"text": "(Belinkov and Bisk, 2017;",
"ref_id": "BIBREF1"
},
{
"start": 269,
"end": 288,
"text": "Cheng et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "Second, the results of our proposed method show that our model can not only get a competitive performance compared to the baseline model on the clean test set, but also outperform all the baseline models on the noisy tests. Moreover, our proposed method doesn't drop so much on the noisy test sets as the ASR errors increase, which proves that our proposed method is more robust to the noisy inputs after we make use of the pronunciation features to amend the representation of the input tokens for the SP errors and HM errors. Last, we find that our method works best when the hyper-parameter p was set to 0.2 in our experiments. It indicates that the different noise sampling methods have different impacts on the final results. Too few or too much ASR errors simulated in the training data both can't make the model achieve the best performance in practice. This finding can guide us to better simulate the noisy data, thus helping us train a more robust model in the future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "In order to further understand the impact of the components of the proposed method, we performed some further studies by training multiple versions of our model by removing the some components of it. The first one is just with the amending pronunciation for SP errors. The second one is just with the amending errors for HM error. The overall results are shown in the Table 4 and Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 368,
"end": 387,
"text": "Table 4 and Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.4"
},
{
"text": "The \"+SP Amendment\" method also improve the robustness and fault tolerance of the model. It is obvious that in all the cases, our proposed Sim-Pron-Words model outperforms baseline system by +1.15 and + 1.89 BLEU. which indicates that it can also greatly enhance the anti-noise capability of the NMT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.4"
},
{
"text": "The \"+HM Amendmen\" method provides further robustness improvements compared to the baseline system on all the noisy test sets. The results show that the model with SP amendment achieves a further improvement by an average of +1.37 and +2.00 BLEU on the NIST and WMT17 noisy test sets respectively. In addition, it has also achieved a performance equivalent to baseline on the clean test sets. It demonstrates that homophones feature is an effective input feature for improving the robustness of Chinese-sourced NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.4"
},
{
"text": "Eventually, as expectecd, the best performance is obtained with the simultaneous use of all the tested elements, proving that these two features can cooperate with each other to improve the performance further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.4"
},
{
"text": "We also investigate the training cost of our proposed method and the baseline system. The loss curves are shown in the Figure 2 . It shows that the training cost of our model is higher than the baseline system, which indicates that our proposed model may take more words into consideration when predicting the next word, because it aggregate the pronunciation information of the source side character. Thus we can get a higher BLEU score on the test sets than the baseline system, which will ignore some more appropriate word candidates just without the pronunciation information. The training loss curves and the BLEU results on the test sets show that our approach effectively improves the generalization performance of the conventional NMT model trained on the clean training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Training Cost",
"sec_num": "4.5"
},
{
"text": "We also evaluate the performance of our proposed method and the baseline on the noisy test sets with different source sentence lengths. As shown in Figure 3 , the translation quality of both systems is improved as the length increases and then degrades as the length exceeds 50. Our observation is also consistent with prior work . These curves imply that more context is helpful to noise disambiguation. It also can be seen that our robust system outperforms the baseline model on all the noisy test sets in each length interval. Besides, the increasing number of the error in the source sentence doesn't degrade the performance of our proposed model too much, indicating the effectiveness of our method.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Effect of Source Sentence Length",
"sec_num": "4.6"
},
{
"text": "In Table 6 , we provide a realistic example to illustrate the advantage of our robust NMT system on erroneous ASR output. For this case, the syntactic structure and meaning of the original sentence are destroyed since the original character \"\u6570\" which means digit is misrecognized as the character \"\u4e66\" which means book. \"\u6570\" and \"\u4e66\" share the same pronunciation without tones. Human beings generally have no obstacle to understanding this flawed sentence with the aid of its correct pronunciation. The baseline NMT system can hardly avoid the translation of \"\u4e66\" which is a high-frequency character with explicit word sense. In contrast, our robust NMT system can translate this sentence correctly. We also observe that our system works well even if the original character \"\u6570\" is substituted with other homophones, such as \"\u8212\" which means comfortable. It shows that our system has a powerful ability to recover the minor ASR error. We consider that the robustness improvement is mainly attributed to our proposed ASR-specific noise training and Chinese Pinyin feature.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Case Study",
"sec_num": "4.7"
},
{
"text": "It is necessary to enhance the robustness of machine translation since the ASR system carries misrecognized transcriptions over into the downstream MT system in the SLT scenario. Prior work at-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Speech \u8be5 \u6570 \u6570 \u6570 \u5b57 \u5df2 \u7ecf \u5927 \u5e45 \u4e0b \u964d \u8fd1 \u4e00 \u534a\u3002 g\u0101i sh\u00f9 z\u00ec y\u01d0 j\u012bng d\u00e0 f\u00fa xi\u00e0 ji\u00e0ng j\u00ecn y\u012b b\u00e0n\u3002 ASR \u8be5 \u4e66 \u4e66 \u4e66 \u5b57 \u5df2 \u7ecf \u5927 \u5e45 \u4e0b \u964d \u8fd1 \u4e00 \u534a\u3002 g\u0101i sh\u016b z\u00ec y\u01d0 j\u012bng d\u00e0 f\u00fa xi\u00e0 ji\u00e0ng j\u00ecn y\u012b b\u00e0n\u3002 Ref",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The figure has fallen sharply by almost half.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The book has fallen by nearly half. Our Approach The figure has fallen by nearly half. Table 6 : For the same erroneous ASR output, translations of the baseline NMT system and our robust NMT system. tempted to induce noise by considering the realistic ASR outputs as the source corpora used for training MT systems (Peitz et al., 2012; Tsvetkov et al., 2014) . Although the problem of error propagation could be alleviated by the promising end-to-end speech translation models (Serdyuk et al., 2018; B\u00e9rard et al., 2018) . Unfortunately, there are few training data in the form of speech paired with text translations. In contrast, our approach utilizes the large-scale written parallel corpora. Recently, Sperber et al. (2017) adapted the NMT model to noise outputs from ASR, where they introduced artificially corrupted inputs during the training process and only achieved minor improvements on noisy input but harmed the translation quality on clean text. However, our approach not only significantly enhances the robustness of NMT on noisy test sets, but also improves the generalization performance.",
"cite_spans": [
{
"start": 315,
"end": 335,
"text": "(Peitz et al., 2012;",
"ref_id": "BIBREF13"
},
{
"start": 336,
"end": 358,
"text": "Tsvetkov et al., 2014)",
"ref_id": "BIBREF21"
},
{
"start": 477,
"end": 499,
"text": "(Serdyuk et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 500,
"end": 520,
"text": "B\u00e9rard et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": null
},
{
"text": "In the context of NMT, a similar approach was very recently proposed by Cheng et al. (2018) , where they proposed two methods of constructing adversarial samples with minor perturbations to train NMT models more robust by supervising both the encoder and decoder to represent similarly for both the perturbed input sentence and its original counterpart. In contrast, our approach has several advantages: 1) our method of constructing noise examples is efficient yet straightforward without expensive computation of words similarity at training time; 2) our method has only one hyper-parameter without putting too much effort into performance tuning; 3) the training of our approach performs efficiently without pre-training of NMT models and complicated discriminator; 4) our approach achieves a stable performance on noise input with different amount of errors.",
"cite_spans": [
{
"start": 72,
"end": 91,
"text": "Cheng et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": null
},
{
"text": "Our approach is motivated by the work of NMT incorporated with linguistic input features (Sennrich and Haddow, 2016). Chinese linguistic features, such as radicals and Pinyin, have been demonstrated effective to Chinese-sourced NMT (Liu et al., 2019; Zhang and Matsumoto, 2017; Du and Way, 2017) and Chinese ASR (Chan and Lane, 2016) . We also incorporate Pinyin as an additional input feature in the robust NMT model, aiming at improving the robustness of NMT further.",
"cite_spans": [
{
"start": 232,
"end": 250,
"text": "(Liu et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 251,
"end": 277,
"text": "Zhang and Matsumoto, 2017;",
"ref_id": "BIBREF25"
},
{
"start": 278,
"end": 295,
"text": "Du and Way, 2017)",
"ref_id": "BIBREF6"
},
{
"start": 312,
"end": 333,
"text": "(Chan and Lane, 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": null
},
{
"text": "Voice input has become popular recently and as a result, machine translation systems have to deal with the input from the results of ASR systems which contains recognition errors. In this paper we aim to improve the robustness of NMT when its input contains ASR errors from two aspects. One is from the perspective of data by adding simulated ASR errors to the training data so that the training data and the test data have a consistent distribution. The other is from the perspective of the model itself. Our method takes measures to handle two types of the most widely existent ASR errors: substitution errors between the words with similar pronunciation (SP errors) and substitution errors between homophone words (HM errors). For SP errors, we make use of the context pronunciation information to correct the embedding of Pinyin words. For HM errors, we use pronunciation information directly to amend the encoding of source words. Experiment results prove the effectiveness of our method and the ablation study indicates that our method can handle both the types of errors well. Experiments also show that our method is stable during training and more robust to the errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The corpora include LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.statmt.org/moses/ 3 https://github.com/letiantian/ChineseTone",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine transla- tion. In Proc. ICLR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "End-to-end automatic speech translation of audiobooks",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "B\u00e9rard",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Ali",
"middle": [
"Can"
],
"last": "Kocabiyikoglu",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Pietquin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre B\u00e9rard, Laurent Besacier, Ali Can Ko- cabiyikoglu, and Olivier Pietquin. 2018. End-to-end automatic speech translation of audiobooks. In Proc. ICASSP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On online attention-based speech recognition and joint mandarin character-pinyin training",
"authors": [
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Chan and Ian Lane. 2016. On online attention-based speech recognition and joint man- darin character-pinyin training. In Proc. Inter- speech.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Towards robust neural machine translation",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "1756--1766",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Proc. ACL, pages 1756- 1766.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.1078"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Pinyin as subword unit for chinese-sourced neural machine translation",
"authors": [
{
"first": "Jinhua",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2017,
"venue": "Irish Conference on Artificial Intelligence and Cognitive Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhua Du and Andy Way. 2017. Pinyin as subword unit for chinese-sourced neural machine translation. In Irish Conference on Artificial Intelligence and Cognitive Science.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann N",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.03122"
]
},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, De- nis Yarats, and Yann N Dauphin. 2017. Convolu- tional sequence to sequence learning. arXiv preprint arXiv:1705.03122.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Disentangling asr and mt errors in speech translation",
"authors": [
{
"first": "Ngoc-Tien",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Lecouteux",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1709.00678"
]
},
"num": null,
"urls": [],
"raw_text": "Ngoc-Tien Le, Benjamin Lecouteux, and Laurent Besacier. 2017. Disentangling asr and mt er- rors in speech translation. arXiv preprint arXiv:1709.00678.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Robust neural machine translation with joint textual and phonetic embedding",
"authors": [
{
"first": "Hairong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3044--3049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, and Zhongjun He. 2019. Robust neural machine translation with joint textual and phonetic embed- ding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3044-3049.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic speech recognition errors as a predictor of l2 listening difficulties",
"authors": [
{
"first": "Maryam",
"middle": [],
"last": "Sadat Mirzaei",
"suffix": ""
},
{
"first": "Kourosh",
"middle": [],
"last": "Meshgi",
"suffix": ""
},
{
"first": "Tatsuya",
"middle": [],
"last": "Kawahara",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC)",
"volume": "",
"issue": "",
"pages": "192--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maryam Sadat Mirzaei, Kourosh Meshgi, and Tatsuya Kawahara. 2016. Automatic speech recognition er- rors as a predictor of l2 listening difficulties. In Proc. the Workshop on Computational Linguistics for Lin- guistic Complexity (CL4LC), pages 192-201.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proc. the 40th an- nual meeting on association for computational lin- guistics, pages 311-318.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Spoken language translation using automatically transcribed text in training",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Peitz",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Wiesler",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Nu\u00dfbaum-Thom",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Peitz, Simon Wiesler, Markus Nu\u00dfbaum- Thom, and Hermann Ney. 2012. Spoken language translation using automatically transcribed text in training. In Proc. IWSLT.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Assessing the impact of speech recognition errors on machine translation quality. Association for Machine Translation in the",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Ruiz",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2014,
"venue": "Americas (AMTA)",
"volume": "",
"issue": "",
"pages": "261--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Ruiz and Marcello Federico. 2014. Assessing the impact of speech recognition errors on machine translation quality. Association for Machine Trans- lation in the Americas (AMTA), Vancouver, Canada, pages 261-274.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Linguistic input features improve neural machine translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich and Barry Haddow. 2016. Linguistic in- put features improve neural machine translation. In Proc. WMT.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.07909"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Edinburgh neural machine translation systems for wmt 16",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. the First Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation sys- tems for wmt 16. In Proc. the First Conference on Machine Translation.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Towards end-to-end spoken language understanding",
"authors": [
{
"first": "Dmitriy",
"middle": [],
"last": "Serdyuk",
"suffix": ""
},
{
"first": "Yongqiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Fuegen",
"suffix": ""
},
{
"first": "Anuj",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Baiyang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.08395"
]
},
"num": null,
"urls": [],
"raw_text": "Dmitriy Serdyuk, Yongqiang Wang, Christian Fue- gen, Anuj Kumar, Baiyang Liu, and Yoshua Bengio. 2018. Towards end-to-end spoken language under- standing. arXiv preprint arXiv:1802.08395.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural lattice-to-sequence models for uncertain inputs",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.00559"
]
},
"num": null,
"urls": [],
"raw_text": "Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2017. Neural lattice-to-sequence models for uncertain inputs. arXiv preprint arXiv:1704.00559.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104-3112.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Augmenting translation models with simulated acoustic confusions for improved spoken language translation",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Metze",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulia Tsvetkov, Florian Metze, and Chris Dyer. 2014. Augmenting translation models with simu- lated acoustic confusions for improved spoken lan- guage translation. Proc. ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 6000-6010.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Error analysis of statistical machine translation output",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "D'haro Luis",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. LREC",
"volume": "",
"issue": "",
"pages": "697--702",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Vilar, Jia Xu, D'Haro Luis Fernando, and Her- mann Ney. 2006. Error analysis of statistical ma- chine translation output. In Proc. LREC, pages 697- 702.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Google's neural machine translation system",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "Bridging the gap between human and machine translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between hu- man and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving character-level japanese-chinese neural machine translation with radicals as an additional input feature",
"authors": [
{
"first": "Jinyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tadahiro",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 International Conference on",
"volume": "",
"issue": "",
"pages": "172--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinyi Zhang and Tadahiro Matsumoto. 2017. Improv- ing character-level japanese-chinese neural machine translation with radicals as an additional input fea- ture. In Asian Language Processing (IALP), 2017 International Conference on, pages 172-175. IEEE.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Training cost of the baseline model (blue dots) and our proposed method (red dots).",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Effect of source sentence lengths of noisy input.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Word error rate (WER) against all the words for the three types of ASR errors."
},
"TABREF2": {
"content": "<table><tr><td>and Ta-</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": ""
},
"TABREF3": {
"content": "<table><tr><td>System</td><td colspan=\"2\">p Clean</td><td>Noise Ave.</td></tr><tr><td>Baseline</td><td>-</td><td colspan=\"2\">45.21 42.40</td></tr><tr><td>+SP Amendment</td><td colspan=\"3\">0.2 45.20 43.55</td></tr><tr><td colspan=\"4\">+HM Amendment 0.2 45.30 43.77</td></tr><tr><td colspan=\"4\">+Both Amendment 0.2 45.13 44.45</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Case-sensitive BLEU scores of our approaches on thec NIST clean test set (average bleu score on nist03, nist04, nist05, nist06) and three artificial noisy test sets (1 Sub, 2 Subs and 3 Subs) which are crafted by randomly substituting one, two and three original characters of each source sentence in the clean test set with HM errors or SP errors, respectively. p is the substitution rate."
}
}
}
}