{
"paper_id": "I17-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:38:16.143855Z"
},
"title": "Local Monotonic Attention Mechanism for End-to-End Speech and Language Processing",
"authors": [
{
"first": "Andros",
"middle": [],
"last": "Tjandra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"country": "Japan"
}
},
"email": "andros.tjandra.ai6@is.naist.jp"
},
{
"first": "Sakriani",
"middle": [],
"last": "Sakti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"country": "Japan"
}
},
"email": "ssakti@is.naist.jp"
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"country": "Japan"
}
},
"email": "s-nakamura@is.naist.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recently, encoder-decoder neural networks have shown impressive performance on many sequence-related tasks. The architecture commonly uses an attentional mechanism which allows the model to learn alignments between the source and the target sequence. Most attentional mechanisms used today is based on a global attention property which requires a computation of a weighted summarization of the whole input sequence generated by encoder states. However, it is computationally expensive and often produces misalignment on the longer input sequence. Furthermore, it does not fit with monotonous or left-to-right nature in several tasks, such as automatic speech recognition (ASR), grapheme-to-phoneme (G2P), etc. In this paper, we propose a novel attention mechanism that has local and monotonic properties. Various ways to control those properties are also explored. Experimental results on ASR, G2P and machine translation between two languages with similar sentence structures, demonstrate that the proposed encoderdecoder model with local monotonic attention could achieve significant performance improvements and reduce the computational complexity in comparison with the one that used the standard global attention architecture.",
"pdf_parse": {
"paper_id": "I17-1044",
"_pdf_hash": "",
"abstract": [
{
"text": "Recently, encoder-decoder neural networks have shown impressive performance on many sequence-related tasks. The architecture commonly uses an attentional mechanism which allows the model to learn alignments between the source and the target sequence. Most attentional mechanisms used today is based on a global attention property which requires a computation of a weighted summarization of the whole input sequence generated by encoder states. However, it is computationally expensive and often produces misalignment on the longer input sequence. Furthermore, it does not fit with monotonous or left-to-right nature in several tasks, such as automatic speech recognition (ASR), grapheme-to-phoneme (G2P), etc. In this paper, we propose a novel attention mechanism that has local and monotonic properties. Various ways to control those properties are also explored. Experimental results on ASR, G2P and machine translation between two languages with similar sentence structures, demonstrate that the proposed encoderdecoder model with local monotonic attention could achieve significant performance improvements and reduce the computational complexity in comparison with the one that used the standard global attention architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "End-to-end training is a newly emerging approach to sequence-to-sequence mapping tasks, that allows the model to directly learn the mapping between variable-length representation of different modalities (i.e., text-to-text sequence Sutskever et al., 2014) , speech-totext sequence (Chorowski et al., 2014; Chan et al., 2016) , image-to-text sequence (Xu et al., 2015) , etc).",
"cite_spans": [
{
"start": 232,
"end": 255,
"text": "Sutskever et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 281,
"end": 305,
"text": "(Chorowski et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 306,
"end": 324,
"text": "Chan et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 350,
"end": 367,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One popular approaches in the end-to-end mapping tasks of different modalities is based on encoder-decoder architecture. The earlier version of an encoder-decoder model is built with only two different components (Sutskever et al., 2014; Cho et al., 2014b) : (1) an encoder that processes the source sequence and encodes them into a fixedlength vector; and (2) a decoder that produces the target sequence based on information from fixedlength vector given by encoder. Both the encoder and decoder are jointly trained to maximize the probability of a correct target sequence given a source sequence. This architecture has been applied in many applications such as machine translation (Sutskever et al., 2014; Cho et al., 2014b) , image captioning (Karpathy and Fei-Fei, 2015) , and so on.",
"cite_spans": [
{
"start": 213,
"end": 237,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 238,
"end": 256,
"text": "Cho et al., 2014b)",
"ref_id": "BIBREF5"
},
{
"start": 683,
"end": 707,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 708,
"end": 726,
"text": "Cho et al., 2014b)",
"ref_id": "BIBREF5"
},
{
"start": 746,
"end": 774,
"text": "(Karpathy and Fei-Fei, 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, such architecture encounters difficulties, especially for coping with long sequences. Because in order to generate the correct target sequence, the decoder solely depends only on the last hidden state of the encoder. In other words, the network needs to compress all of the information contained in the source sequence into a single fixed-length vector. (Cho et al., 2014a) demonstrated a decrease in the performance of the encoder-decoder model associated with an increase in the length of the input sentence sequence. Therefore, introduced attention mechanism to address these issues. Instead of relying on a fixed-length vector, the decoder is assisted by the attention module to get the related context from the encoder sides, depends on the current decoder states.",
"cite_spans": [
{
"start": 363,
"end": 382,
"text": "(Cho et al., 2014a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most attention-based encoder-decoder model used today has a \"global\" property Luong et al., 2015) . Every time the decoder needs to predict the output given the previous output, it must compute a weighted summarization of the whole input sequence generated by the encoder states. This global property allows the decoder to address any parts of the source sequence at each step of the output generation and provides advantages in some cases like machine translation tasks. Specifically, when the source and the target languages have different sentence structures and the last part of the target sequence may depend on the first part of the source sequence. However, although the global attention mechanism has often improved performance in some tasks, it is very computationally expensive. For a case that requires mapping between long sequences, misalignments might happen in standard attention mechanism (Kim et al., 2017) . Furthermore, it does not fit with monotonous or left-toright natures in several tasks, such as ASR, G2P, etc.",
"cite_spans": [
{
"start": 78,
"end": 97,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 905,
"end": 923,
"text": "(Kim et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel attention module that has two important characteristics to address those problems: local and monotonicity properties. The local property helps our attention module focus on certain parts from the source sequence that the decoder wants to transcribe, and the monotonicity property strictly generates alignment left-to-right from beginning to the end of the source sequence. In case of speech recognition task that need to produces a transcription given the speech signal, the attention module is now able to focus on the audio's specific timing and always move in one direction from the start to the end of the audio. Similar way can be applied also for G2P or machine translation (MT) between two languages with similar sentences structure, i.e., Subject-Verb-Object (SVO) word order in English and French languages. Experimental results demonstrate that the proposed encoder-decoder model with local monotonic attention could achieve significant performance improvements and reduce the computational complexity in comparison with the one that used the standard global attention architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The encoder-decoder model is a neural network that directly models conditional probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
{
"text": "Figure 1: Attention-based encoder-decoder archi- tecture. p(y|x), where x = [x 1 , ..., x S ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
{
"text": "is the source sequence with length S and y = [y 1 , ..., y T ] is the target sequence with length T . Figure 1 shows the overall structure of the attention-based encoderdecoder model that consists of encoder, decoder and attention modules.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
{
"text": "The encoder task processes input sequence x and outputs representative information h e = [h e 1 , ..., h e S ] for the decoder. The attention module is an extension scheme for assisting the decoder to find relevant information on the encoder side based on the current decoder hidden states Luong et al., 2015) . Usually, attention modules produces context information c t at the time t based on the encoder and decoder hidden states:",
"cite_spans": [
{
"start": 290,
"end": 309,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c t = S s=1 a t (s) * h e s (1) a t (s) = Align(h e s , h d t ) = exp(Score(h e s , h d t )) S s=1 exp(Score(h e s , h d t ))",
"eq_num": "(2)"
}
],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
{
"text": "There are several variations for score functions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Score(h e s , h d t ) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 h e s , h d t , dot product h e s W s h d t , bilinear V s tanh(W s [h e s , h d t ]), MLP",
"eq_num": "(3)"
}
],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
{
"text": "where Score :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
{
"text": "(R M \u00d7 R N ) \u2192 R, M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
{
"text": "is the number of hidden units for encoder and N is the number of hidden units for decoder. Finally, the decoder task, which predicts the target sequence probability at time t based on previous output and context information c t can be formulated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log p(y|x) = T t=1 log p(y t |y <t , c t )",
"eq_num": "(4)"
}
],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
{
"text": "For speech recognition task, most common input x is a sequence of feature vectors like Mel-spectral filterbank and/or MFCC. Therefore, x \u2208 R S\u00d7D where D is the number of features and S is the total frame length for an utterance. Output y, which is a speech transcription sequence, can be either phoneme or grapheme (character) sequence. In text-related task such as machine translation, x and y are a sequence of word or character indexes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based Encoder Decoder Neural Network",
"sec_num": "2"
},
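To make Eqs. (1)-(3) concrete, the following is a minimal NumPy sketch of one global attention step; the array shapes, parameter names (W_bi, W_mlp, V_mlp), and random values are illustrative assumptions, not the authors' Chainer implementation.

```python
# Global attention for a single decoder step (Eqs. 1-3): score every encoder
# state, softmax over the whole source, then take the weighted sum as context.
import numpy as np

S, M, N, K = 10, 4, 6, 5            # source length, encoder/decoder/projection dims
rng = np.random.default_rng(0)
h_enc = rng.normal(size=(S, M))     # encoder states h^e_1..h^e_S
h_dec = rng.normal(size=(N,))       # current decoder state h^d_t

W_bi = rng.normal(size=(M, N))      # bilinear parameter (hypothetical shapes)
W_mlp = rng.normal(size=(K, M + N))
V_mlp = rng.normal(size=(K,))

def score(h_e, h_d, kind="mlp"):
    if kind == "dot":               # dot product; requires M == N
        return h_e @ h_d
    if kind == "bilinear":          # h^e_s W h^d_t
        return h_e @ W_bi @ h_d
    # MLP: V^T tanh(W [h^e_s; h^d_t])
    return V_mlp @ np.tanh(W_mlp @ np.concatenate([h_e, h_d]))

scores = np.array([score(h_enc[s], h_dec) for s in range(S)])
a_t = np.exp(scores - scores.max())          # softmax over all S states (Eq. 2)
a_t /= a_t.sum()
c_t = (a_t[:, None] * h_enc).sum(axis=0)     # context vector c_t (Eq. 1)
print(c_t.shape)                             # (M,)
```

Note that every decoding step touches all S encoder states, which is exactly the O(T * S) cost that the local mechanism below avoids.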
{
"text": "In the previous section, we explained the standard global attention-based encoder-decoder model. However, in order to control the area and focus attention given previous information, such mechanism requires to apply the scoring function into all the encoder states and normalizes them with a softmax function. Another problem is we cannot explicitly enforce the probability mass generated by the current attention modules that are always moving incrementally to the end of the source sequence. In this section, we discuss and explain how to model the locality and monotonicity properties on the attention module. This way, we could improve the sensitivity of capturing regularities and ensure to focus only an important subset instead of whole sequence. Figure 2 illustrates the overall mechanism of our proposed local monotonic attention, and details are described blow.",
"cite_spans": [],
"ref_spans": [
{
"start": 754,
"end": 762,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Locality and Monotonicity Properties",
"sec_num": "3"
},
{
"text": "Position First, we define how to predict the next central position of the alignment illustrated in . At time t, we want to decode the t-th target output given the source sequence, previous output y t\u22121 , and current decoder hidden states h d t \u2208 R N . In standard approaches, we use hidden states h d t to predict the position difference \u2206p t with a multilayer perceptron (MLP). We use variable \u2206p t to determine how far we should move the center of the alignment compared to previous center p t\u22121 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "In this paper, we propose two different formulations for estimating \u2206p t to ensure a forward or monotonicity movement:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "\u2022 Constrained position prediction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "We limit maximum range from \u2206p t with hyperparameter C max with the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "\u2206p t = C max * sigmoid(V p tanh(W p h d t )) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "Here we can control how far our next center of alignment position p t relies on our datasets and guarantee 0 \u2264 \u2206p t \u2264 C max . However, it requires us to handle hyperparameter C max .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
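As a quick illustration of Eq. (5), here is a small NumPy sketch of the constrained prediction; the parameter shapes and values are assumptions for illustration only.

```python
# Constrained position prediction (Eq. 5): a sigmoid bounds the forward step,
# so 0 <= dp_t <= C_max by construction.
import numpy as np

N, K, C_max = 6, 5, 5.0
rng = np.random.default_rng(1)
W_p = rng.normal(size=(K, N))       # hypothetical projection parameters
V_p = rng.normal(size=(K,))

def delta_p_constrained(h_d):
    z = V_p @ np.tanh(W_p @ h_d)
    return C_max / (1.0 + np.exp(-z))    # C_max * sigmoid(z)

h_d = rng.normal(size=(N,))
dp = delta_p_constrained(h_d)
assert 0.0 <= dp <= C_max
```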
{
"text": "\u2022 Unconstrained position prediction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "Compared to a previous formulation, since we do not limit the maximum range of \u2206p t , here we can ignore hyperparameter C max and use exponential (exp) function instead of sigmoid. We can also use another function (e.g softplus) as long as the function satisfy f : R \u2192 R + 0 and the result of \u2206p t \u2265 0. We formulate unconstrained position prediction with following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "\u2206p t = exp(V p tanh(W p h d t )) (6) Here V p \u2208 R K\u00d71 , W p \u2208 R K\u00d7N , N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "is the number of decoder hidden units and K is the number of hidden projection layer units. We omit the bias for simplicity. Both equations guarantee monotonicity properties since \u2200t \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "[1..T ], p t \u2265 (p t\u22121 + \u2206p t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
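The sketch below illustrates Eq. (6) and the resulting monotonic update of the alignment center; the parameter shapes and the toy decoding loop are assumptions.

```python
# Unconstrained position prediction (Eq. 6): exp(.) keeps dp_t >= 0, so the
# update p_t = p_{t-1} + dp_t can only move the center forward.
import numpy as np

N, K = 6, 5
rng = np.random.default_rng(2)
W_p = rng.normal(size=(K, N))       # hypothetical parameters, biases omitted
V_p = rng.normal(size=(K,))

def delta_p_unconstrained(h_d):
    return float(np.exp(V_p @ np.tanh(W_p @ h_d)))   # always >= 0

p = 0.0
for t in range(5):                  # toy decoding loop with random states
    h_d = rng.normal(size=(N,))
    p_prev, p = p, p + delta_p_unconstrained(h_d)
    assert p >= p_prev              # monotonicity holds at every step
```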
{
"text": "Additionally, we also used scaling variable \u03bb t to scale the unnormalized Gaussian distribution that depends on h t . We calculated \u03bb t with following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb t = exp(V \u03bb tanh(W p h d t ))",
"eq_num": "(7)"
}
],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "where V \u03bb \u2208 R K\u00d71 . In our initial experiments, we discovered that we improved our model performance by scaling with \u03bb t for each time-step. The main objective of this step is to generate a scaled Gaussian distribution a N t :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a N t (s) = \u03bb t * exp \u2212 (s \u2212 p t ) 2 2\u03c3 2 .",
"eq_num": "(8)"
}
],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
{
"text": "where p t is the mean and \u03c3 is the standard deviation, both of which are used to calculate the weighted sum from the encoder states to generate context vector c t later. In this paper, we treat \u03c3 as a hyperparameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity-based Prediction of Central",
"sec_num": "1."
},
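A small NumPy sketch of Eqs. (7)-(8), with hypothetical parameters: λ_t scales an unnormalized Gaussian "prior" centered at p_t with fixed standard deviation σ.

```python
# Scaled, unnormalized Gaussian alignment (Eqs. 7-8) over source positions s.
import numpy as np

S, N, K, sigma = 20, 6, 5, 1.5
rng = np.random.default_rng(3)
W_p = rng.normal(size=(K, N))       # shared projection, as in Eq. (7)
V_lam = rng.normal(size=(K,))

h_d = rng.normal(size=(N,))
p_t = 7.3                                     # current alignment center
lam_t = np.exp(V_lam @ np.tanh(W_p @ h_d))    # scaling variable (Eq. 7)

s = np.arange(S)
a_gauss = lam_t * np.exp(-((s - p_t) ** 2) / (2.0 * sigma ** 2))  # Eq. (8)
print(a_gauss.argmax())                       # peaks at the position nearest p_t
```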
{
"text": "After calculating new position p t , we generate locality-based alignment, as shown in Part (2) of Figure 2 . Based on predicted position p t , we follow (Luong et al., 2015) to generate alignment a S t only within [p t \u2212 2\u03c3, p t + 2\u03c3]:",
"cite_spans": [
{
"start": 154,
"end": 174,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 99,
"end": 107,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Locality-based Alignment Generation",
"sec_num": "2."
},
{
"text": "a S t (s) = Align(h e s , h d t ), (9) \u2200s \u2208 [p t \u2212 2\u03c3, p t + 2\u03c3].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locality-based Alignment Generation",
"sec_num": "2."
},
{
"text": "Since p t is a real number and the indexes for the encoder states are integers, we convert p t into an integer with floor operation. After we know the center of the position p t , we only need to calculate the scores (Eq. 3) for each encoder states in [p t \u22122\u03c3, .., p t +2\u03c3] then calculate the context alignment scores (Eq. 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locality-based Alignment Generation",
"sec_num": "2."
},
{
"text": "Compared to the standard global attention, we can reduce the decoding computational complexity O(T * S) into O(T * \u03c3) where \u03c3 S and \u03c3 is constant, T is total decoding step, S is the length of the encoder states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locality-based Alignment Generation",
"sec_num": "2."
},
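The following sketch shows the windowed scoring of Eq. (9) and why the per-step cost drops from O(S) to O(σ): only the 4σ+1 states around the floored center are scored and normalized. The dot-product scorer and all shapes here are illustrative assumptions.

```python
# Locality-based alignment (Eq. 9): score only encoder states inside the
# window [p_t - 2*sigma, p_t + 2*sigma], then softmax over that window.
import numpy as np

S, M, sigma = 50, 4, 2              # 2*sigma chosen as an integer half-width
rng = np.random.default_rng(4)
h_enc = rng.normal(size=(S, M))
h_dec = rng.normal(size=(M,))

p_t = int(np.floor(13.8))           # floor the real-valued center
lo = max(0, p_t - 2 * sigma)
hi = min(S - 1, p_t + 2 * sigma)

scores = h_enc[lo:hi + 1] @ h_dec                # dot-product scorer, window only
a_score = np.exp(scores - scores.max())
a_score /= a_score.sum()                         # local softmax (Eq. 2 restricted)
print(lo, hi, a_score.shape)                     # 9 17 (9,)
```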
{
"text": "In the last step, we calculate context c t with alignments a N t and a S t , as shown in Part (3) of Figure 2 :",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 109,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Context Calculation",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c t = (pt+2\u03c3) s=(pt\u22122\u03c3) a N t (s) * a S t (s) * h e s",
"eq_num": "(10)"
}
],
"section": "Context Calculation",
"sec_num": "3."
},
{
"text": "Context c t and current hidden state h d t will later be utilized for calculating current output y t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Calculation",
"sec_num": "3."
},
{
"text": "Overall, we can rephrase the first step as generating \"prior\" probabilities a N t based on the previous p t\u22121 position and the current decoder states. Then the second step task generates \"likelihood\" probabilities a S t by measuring the relevance of our encoder states with the current decoder states. In the third step, we combine our \"prior\" and \"likelihood\" probability into an unnormalized \"posterior\" probability a t and calculate expected context c t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Calculation",
"sec_num": "3."
},
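Putting the three steps together, here is a minimal end-to-end sketch of Eq. (10): the Gaussian "prior" and the scorer "likelihood" are multiplied into an unnormalized "posterior" that weights the windowed encoder states. Everything here (shapes, scorer, values) is an illustrative assumption mirroring the earlier sketches.

```python
# Context calculation (Eq. 10): c_t = sum_s a^N_t(s) * a^S_t(s) * h^e_s over
# the local window around the predicted center p_t.
import numpy as np

S, M, sigma = 50, 4, 2
rng = np.random.default_rng(5)
h_enc = rng.normal(size=(S, M))
h_dec = rng.normal(size=(M,))

p_real, lam_t = 13.8, 1.2                        # center from step 1, scale from Eq. 7
p_t = int(np.floor(p_real))
idx = np.arange(max(0, p_t - 2 * sigma), min(S, p_t + 2 * sigma + 1))

a_gauss = lam_t * np.exp(-((idx - p_real) ** 2) / (2.0 * sigma ** 2))  # "prior"
scores = h_enc[idx] @ h_dec
a_score = np.exp(scores - scores.max())
a_score /= a_score.sum()                                               # "likelihood"

c_t = ((a_gauss * a_score)[:, None] * h_enc[idx]).sum(axis=0)          # Eq. (10)
print(c_t.shape)                    # (M,) -- consumed by the decoder with h^d_t
```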
{
"text": "We applied our proposed architecture on ASR task. The local property helps our attention module focus on certain parts from the speech that the decoder wants to transcribe, and the monotonicity property strictly generates alignment left-to-right from beginning to the end of the speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment on Speech Recognition",
"sec_num": "4"
},
{
"text": "We conducted our experiments on the TIMIT 1 (Garofolo et al., 1993) dataset with the same set-up for training, development, and test sets as defined in the Kaldi s5 recipe (Povey et al., 2011) . The training set contains 3696 sentences from 462 speakers. We also used another sets of 50 speakers for the development set and the test set contains 192 utterances, 8 each from 24 speakers. For every experiment, we used 40-dimensional fbank with delta and acceleration (total 120-dimension feature vector) extracted from the Kaldi toolkit. The input features were normalized by subtracting the mean and divided by the standard deviation from the training set. For our decoder target, we re-mapped the original target phoneme set from 61 into 39 phoneme class plus the end of sequence mark (eos).",
"cite_spans": [
{
"start": 172,
"end": 192,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Data",
"sec_num": "4.1"
},
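The normalization described above is the usual per-dimension standardization with training-set statistics; a minimal sketch follows, with toy arrays standing in for the Kaldi fbank features.

```python
# Per-dimension mean/variance normalization: statistics come from the training
# set only and are reused for the development and test sets.
import numpy as np

rng = np.random.default_rng(6)
train_feats = rng.normal(loc=3.0, scale=2.0, size=(1000, 120))  # frames x dims
test_feats = rng.normal(loc=3.0, scale=2.0, size=(200, 120))

mean = train_feats.mean(axis=0)
std = train_feats.std(axis=0)
train_norm = (train_feats - mean) / std
test_norm = (test_feats - mean) / std          # reuse training statistics
```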
{
"text": "On the encoder sides, we projected our input features with a linear layer with 512 hidden units followed by tanh activation function. We used three bidirectional LSTMs (Bi-LSTM) for our encoder with 256 hidden units for each LSTM (total 512 hidden units for Bi-LSTM). To reduce the computational time, we used hierarchical subsampling (Graves, 2012; Bahdanau et al., 2016) , applied it to the top two Bi-LSTM layers, and reduced their length by a factor of 4.",
"cite_spans": [
{
"start": 335,
"end": 349,
"text": "(Graves, 2012;",
"ref_id": "BIBREF8"
},
{
"start": 350,
"end": 372,
"text": "Bahdanau et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architectures",
"sec_num": "4.2"
},
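One common way to realize the factor-4 hierarchical subsampling is to halve the sequence length before each of the top two layers, e.g., by concatenating consecutive frames; the sketch below shows this variant as an assumption, since the paper does not spell out the exact reduction scheme.

```python
# Length reduction by concatenating consecutive frame pairs: applying it twice
# (before each of the top two Bi-LSTM layers) shortens the sequence by 4x.
import numpy as np

def subsample_pairs(x):
    """(T, D) -> (ceil(T/2), 2*D) by concatenating consecutive frames."""
    if x.shape[0] % 2 == 1:
        x = np.pad(x, ((0, 1), (0, 0)))        # pad an all-zero frame if T is odd
    return x.reshape(x.shape[0] // 2, -1)

x = np.zeros((100, 512))            # output of the first Bi-LSTM layer
x = subsample_pairs(x)              # length 50 before layer 2
x = subsample_pairs(x)              # length 25 before layer 3 (factor 4 total)
print(x.shape)                      # (25, 2048)
```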
{
"text": "On the decoder sides, we used a 64dimensional embedding matrix to transform the input phonemes into a continuous vector, followed by two unidirectional LSTMs with 512 hidden units. For every local monotonic model, we used an MLP with 256 hidden units to generate \u2206p t and \u03bb t . Hyperparameter 2\u03c3 was set to 3, and C max for constrained position prediction (see Eq. 5) was set to 5. Both hyperparameters were empirically selected and generally gave consistent results across various settings in our proposed model. For our scorer module, we used bilinear and MLP scorers (see Eq 3) with 256 hidden units. We used an Adam (Kingma and Ba, 2014) optimizer with a learning rate of 5e \u2212 4.",
"cite_spans": [
{
"start": 620,
"end": 641,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architectures",
"sec_num": "4.2"
},
{
"text": "In the recognition phase, we generated transcriptions with best-1 (greedy) search from the decoder. We did not use any language model in this work. All of our models were implemented on the Chainer framework (Tokui et al., 2015) .",
"cite_spans": [
{
"start": 208,
"end": 228,
"text": "(Tokui et al., 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architectures",
"sec_num": "4.2"
},
{
"text": "For comparison, we evaluated our proposed model with the standard global attention-based encoder-decoder model and local-m attention (Luong et al., 2015) as the baseline. Most of the con-figurations follow the above descriptions, except the baseline model that does not have an MLP for generating \u2206p t and \u03bb t . Table 1 summarizes our experiments on our proposed local attention models and compares them to the baseline model using several possible scenarios.",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 319,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Architectures",
"sec_num": "4.2"
},
{
"text": "Considering the use of constrained and unconstrained position prediction \u2206p t , our results show that the model with the unconstrained position prediction (exp) model gives better results than one based on the constrained position prediction (sigmoid) model on both MLP and bilinear scorers. We conclude that it is more beneficial to use the unconstrained position prediction formulation since it gives better performance and we do not need to handle the additional hyperparameter C max .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained vs Unconstrained Position Prediction",
"sec_num": "5.1"
},
{
"text": "Next we investigate the importance of the scorer module by comparing the results between a model with and without it. Our results reveal that, by only relying on Gaussian alignment a N t and set a S t = 1, our model performance's was worse than one that used both the scorer and Gaussian alignment. This might be because the scorer modules are able to correct the details from the Gaussian alignment based on the relevance of the encoder states in the current decoder states. Thus, we conclude that alignment with the scorer is essential for our proposed models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Scorer vs Non-Scorer",
"sec_num": "5.2"
},
{
"text": "Overall, our proposed encoder-decoder model with local monotonic attention significantly improved the performance and reduced the computational complexity in comparison with one that used standard global attention mechanism (we cannot compare directly with (Chorowski et al., 2014) since its pretrained with HMM state alignment). We also tried local-m attention from (Luong et al., 2015), however our model cannot converge and we hypothesize the reason is because ratio length between the speech and their corresponding text is larger than 1, therefore the Table 1 : Results from baseline and proposed models on ASR task with TIMIT test set.",
"cite_spans": [
{
"start": 257,
"end": 281,
"text": "(Chorowski et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 557,
"end": 564,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overall comparison to the baseline",
"sec_num": "5.3"
},
{
"text": "Test PER (%) Global Attention Model (Baseline) Att Enc-Dec (pretrained with HMM align) (Chorowski et al., 2014) 18.6",
"cite_spans": [
{
"start": 87,
"end": 111,
"text": "(Chorowski et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Att Enc-Dec (Pereyra et al., 2017) 23.2 Att Enc-Dec (Luo et al., 2016) 24.5 Att Enc-Dec with MLP Scorer (ours) 23.8 Att Enc-Dec with local-m (ours) (Luong et al., 2015 \u2206p t cannot be represented by fixed value. The best performance achieved by our proposed model with unconstrained position prediction and bilinear scorer, and provided 12.2% relative error rate reduction to our baseline.",
"cite_spans": [
{
"start": 12,
"end": 34,
"text": "(Pereyra et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 52,
"end": 70,
"text": "(Luo et al., 2016)",
"ref_id": null
},
{
"start": 104,
"end": 110,
"text": "(ours)",
"ref_id": null
},
{
"start": 148,
"end": 167,
"text": "(Luong et al., 2015",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We also investigated our proposed architecture on G2P conversion task. Here, the model need to generate corresponding phoneme given small segment of characters and its always moving from left to right. The local property helps our attention module focus on certain parts from the grapheme source sequence that the decoder wants to convert into phoneme, and the monotonicity property strictly generates alignment left-to-right from beginning to the end of the grapheme source sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment on Grapheme-to-Phoneme",
"sec_num": "6"
},
{
"text": "Here, we used the CMUDict dataset 2 . It contains 113438 words for training and 12753 for testing (12000 unique words). For validation, we randomly select 3000 sentences from the training set. The evaluation metrics for this task are phoneme error rate (PER) and word error rate (WER). In the evaluation process, there are some words has multiple references (pronunciations). Therefore, we select one of the references that has lowest PER between compared to our hypothesis, and if the hypothesis completely match with one of those references, then the WER is not increasing. For our encoder input, we used 26 letter (A-Z) + single quotes ('). For our decoder target, we used 39 phonemes plus the end of sequence mark (eos).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "6.1"
},
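The multi-reference scoring described above can be sketched as follows: each hypothesis is scored against its closest reference pronunciation by phoneme edit distance, and the word only counts as an error if it matches no reference exactly. The helper names and the toy pronunciations are assumptions.

```python
# Multi-reference PER/WER scoring: pick the reference with the lowest phoneme
# edit distance; WER increases only if the hypothesis matches no reference.
def edit_distance(a, b):
    # Standard Levenshtein distance with a single rolling row.
    d = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, y in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (x != y))
    return d[-1]

def score_word(hyp, references):
    best = min(references, key=lambda r: edit_distance(hyp, r))
    per_errors = edit_distance(hyp, best)
    word_error = int(hyp not in references)    # exact match to any reference
    return per_errors, len(best), word_error

hyp = ["T", "AH", "M", "EY", "T", "OW"]
refs = [["T", "AH", "M", "EY", "T", "OW"], ["T", "AH", "M", "AA", "T", "OW"]]
print(score_word(hyp, refs))                   # (0, 6, 0)
```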
{
"text": "On the encoder sides, the characters input were projected into 256 dims using embedding matrix. We used two bidirectional LSTMs (Bi-LSTM) for our encoder with 512 hidden units for each LSTM (total 1024 hidden units for Bi-LSTM). On the decoder sides, the phonemes input were projected into 256 dims using embedding matrix, followed by two unidirectional LSTMs with 512 hidden units. For local monotonic model, we used an MLP with 256 hidden units to generate \u2206p t and \u03bb t . For this task, we only used the unconstrained formulation because based on previous sections, we able to achieved better performance and we didn't need to find optimal hyperparameter for C max . For our scorer module, we used MLP scorer with 256 hidden units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architectures",
"sec_num": "6.2"
},
{
"text": "In the decoding phase, we used beam search strategy with beam size 3 to generate the phonemes given the character sequences. For comparison, we evaluated our model with standard global attention and local-m attention model (Luong et al., 2015) as the baseline. Table 2 summarizes our experiment on proposed local attention models. We compared our proposed models with several baselines from other algorithm as well. Our model significantly improving the PER and WER compared to encoderdecoder, attention-based global softmax and localm attention (fixed-step size). Compared to Bi-LSTM model which was trained with explicit alignment, we achieve slightly better PER and WER with larger window size (2\u03c3 = 3).",
"cite_spans": [
{
"start": 223,
"end": 243,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 261,
"end": 268,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Model Architectures",
"sec_num": "6.2"
},
{
"text": "We also conducted experiment on machine translation task, specifically between two languages with similar sentences structure. By using our proposed method, we able to focus only to a small related segment on the source side and the target generation process usually follows the source sentence structure without many reordering process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment on Machine Translation",
"sec_num": "7"
},
{
"text": "We used BTEC dataset (Kikui et al., 2003) and chose English-to-France and Indonesian-to-English parallel corpus. From BTEC dataset, we extracted 162318 sentences for training and 510 sentences for test data. Because there are no default development set, we randomly sampled 1000 sentences from training data for validation set. For all language pairs, we preprocessed our dataset using Moses (Koehn et al., 2007) tokenizer. For training, we replaced any word that appear less then twice with unknown (unk) symbol. In details, we keep 10105 words for French corpus, 8265 words for English corpus and 9577 words for Indonesian corpus. We only used sentence pairs where the source is no longer than 60 words in training phase.",
"cite_spans": [
{
"start": 21,
"end": 41,
"text": "(Kikui et al., 2003)",
"ref_id": "BIBREF10"
},
{
"start": 392,
"end": 412,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "7.1"
},
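A minimal sketch of the vocabulary preprocessing described above, with toy sentences: words seen fewer than twice in training become "unk", and overly long source sentences are dropped.

```python
# Replace rare words (count < 2) with "unk" and filter long source sentences.
from collections import Counter

def build_vocab(sentences, min_count=2):
    counts = Counter(w for s in sentences for w in s)
    return {w for w, c in counts.items() if c >= min_count}

def apply_unk(sentence, vocab):
    return [w if w in vocab else "unk" for w in sentence]

train = [["i", "like", "coffee"], ["i", "like", "tea"], ["zanzibar"]]
vocab = build_vocab(train)                       # {"i", "like"}
filtered = [s for s in train if len(s) <= 60]    # keep sources <= 60 words
print([apply_unk(s, vocab) for s in filtered])
```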
{
"text": "On both encoder and decoder sides, the input words were projected into 256 dims using embedding matrix. We used three Bi-LSTM for our encoder with 512 hidden units for each LSTM (total 1024 hidden unit for Bi-LSTM). For our decoder, we used three LSTM with 512 hidden units. For local monotonic model, we used an MLP with 256 hidden units to generate \u2206p t and \u03bb t . Same as previous section, we only used the unconstrained for-mulation for local monotonic experiment. For our scorer module, we used MLP scorer with 256 hidden units. In the decoding phase, we used beam search strategy with beam size 5 and normalized length penalty with \u03b1 = 1 (Wu et al., 2016) . For comparison, we evaluate our model with standard global attention and local-m attention model (Luo et al., 2016) as the baseline. Table 3 summarizes our experiment on proposed local attention models compared to baseline global attention model and local-m attention model (Luong et al., 2015) . Generally, local monotonic attention had better result compared to global attention on both English-to-France and Indonesianto-English translation task. Our proposed model were able to improve the BLEU up to 2.2 points on English-to-France and 3.6 points on Indonesianto-English translation task compared to standard global attention. Compared to local-m attention with fixed step size, our proposed model able to improve the performance up to 0.8 BLEU on English-to-France and 2.0 BLEU on Indonesianto-English translation task.",
"cite_spans": [
{
"start": 643,
"end": 660,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 760,
"end": 778,
"text": "(Luo et al., 2016)",
"ref_id": null
},
{
"start": 937,
"end": 957,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 796,
"end": 803,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "7.2"
},
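The length-normalized beam scoring with α = 1 can be sketched as below; we assume the Wu et al. (2016) penalty lp(Y) = ((5 + |Y|)/6)^α, and the candidate hypotheses with their summed log-probabilities are made-up toy values.

```python
# Length-normalized beam rescoring: divide each hypothesis' summed log-prob by
# the Wu et al. (2016) length penalty so longer candidates are not unfairly
# penalized.
def length_penalty(length, alpha=1.0):
    return ((5.0 + length) / 6.0) ** alpha

def rescore(hypotheses):
    """hypotheses: list of (tokens, sum_log_prob); returns best-first order."""
    return sorted(hypotheses,
                  key=lambda h: h[1] / length_penalty(len(h[0])),
                  reverse=True)

beams = [(["je", "voudrais", "un", "cafe"], -2.4),
         (["un", "cafe"], -2.0)]
print(rescore(beams)[0][0])   # the longer hypothesis wins after normalization
```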
{
"text": "Humans do not generally process all of the information that they encounter at once. Selective attention, which is a critical property in human perception, allows attention to be focused on particular information while filtering out a range of other information. The biological structure of the eye and the eye movement mechanism is one part of visual selective attention that provides the ability to focus attention selectively on parts of the visual space to acquire information when and where it is needed (Rensink, 2000) . In the case of the cocktail party effect, humans can selectively focus their attentive hearing on a single speaker among various conversation and background noise sources (Cherry, 1953) . The attention mechanism in deep learning has been studied for many years. But, only recently have attention mechanisms made their way into the sequence-to-sequence deep learning architectures that were proposed to solve machine translation tasks. Such mechanisms provide a model with the ability to jointly align and translate . With the attention-based model, the encoder-decoder model significantly improved the performance on machine translation Luong et al., 2015) and has successfully been applied to ASR tasks (Chorowski et al., 2014; Chan et al., 2016) .",
"cite_spans": [
{
"start": 508,
"end": 523,
"text": "(Rensink, 2000)",
"ref_id": "BIBREF19"
},
{
"start": 697,
"end": 711,
"text": "(Cherry, 1953)",
"ref_id": "BIBREF3"
},
{
"start": 1163,
"end": 1182,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 1230,
"end": 1254,
"text": "(Chorowski et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 1255,
"end": 1273,
"text": "Chan et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "However, as we mentioned earlier, most of those attention mechanism are based on \"global\" property, where the attention module tries to match the current hidden states with all the states from the encoder sides. This approach is inefficient and computationally expensive on longer source sequences. A \"local attention\" was recently introduced by (Luong et al., 2015) which provided the capability to only focus small subset of the encoder sides. They also proposed monotonic attention but limited to fixed step-size and not suitable for a task where the length ratio between source and target sequence is vastly different. Our proposed method are able to elevated this problem by predicting the step size dynamically instead of using fixed step size. After we constructed our proposed framework, we found work by (Raffel et al., 2017) recently that also proposed a method for producing monotonic alignment by using Bernoulli random variable to control when the alignment should stop and generate output. However, it cannot attend the source sequence outside the range between previous and current position. In contrast with our approach, we are able to control how large the area we want to attend based on the window size. (Chorowski et al., 2014) also proposed a soft constraint to encourage monotonicity by invoking a penalty based on the current alignment and previous alignments. However, the methods still did not guarantee a monotonicity movement of the attention.",
"cite_spans": [
{
"start": 346,
"end": 366,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 813,
"end": 834,
"text": "(Raffel et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 1224,
"end": 1248,
"text": "(Chorowski et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "To the best of our knowledge, only few studies have explored about local and monotonicity properties on an attention-based model. This work presents a novel attention module with locality and monotonicity properties. Our proposed mechanism strictly enforces monotonicity and locality properties in their alignment by explicitly modeling them in mathematical equations. The observation on our proposed model can also possibly act as regularizer by only observed a subset of encoder states. Here, we also explore various ways to control both properties and evaluate the impact of each variations on our proposed model. Experimental results also demonstrate that the proposed encoder-decoder model with local monotonic attention could provide a better performances in comparison with the standard global attention architecture and local-m attention model (Luong et al., 2015) .",
"cite_spans": [
{
"start": 852,
"end": 872,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "This paper demonstrated a novel attention mechanism for encoder decoder model that ensures monotonicity and locality properties. We explored various ways to control these properties, including dynamic monotonicity-based position prediction and locality-based alignment generation. The results reveal our proposed encoder-decoder model with local monotonic attention significantly improved the performance on three different tasks and able to reduced the computational complexity more than one that used standard global attention architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "Part of this work was supported by JSPS KAKENHI Grant Numbers JP17H06101 and JP17K00237.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": "10"
},
{
"text": "https://catalog.ldc.upenn.edu/ldc93s1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "CMUdict: https://sourceforge.net/ projects/cmusphinx/files/G2P%20Models/ phonetisaurus-cmudict-split.tar.gz",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Endto-end attention-based large vocabulary speech recognition",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Chorowski",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Serdyuk",
"suffix": ""
}
],
"year": 2016,
"venue": "Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4945--4949",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. 2016. End- to-end attention-based large vocabulary speech recognition. In Acoustics, Speech and Signal Pro- cessing (ICASSP), 2016 IEEE International Confer- ence on, pages 4945-4949. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition",
"authors": [
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2016,
"venue": "Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4960--4964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Acoustics, Speech and Signal Pro- cessing (ICASSP), 2016 IEEE International Confer- ence on, pages 4960-4964. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Some experiments on the recognition of speech, with one and with two ears",
"authors": [
{
"first": "E",
"middle": [
"Colin"
],
"last": "Cherry",
"suffix": ""
}
],
"year": 1953,
"venue": "The Journal of the acoustical society of America",
"volume": "25",
"issue": "5",
"pages": "975--979",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E Colin Cherry. 1953. Some experiments on the recognition of speech, with one and with two ears. The Journal of the acoustical society of America, 25(5):975-979.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the properties of neural machine translation: Encoder-decoder approaches. Syntax, Semantics and Structure in Statistical Translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014a. On the proper- ties of neural machine translation: Encoder-decoder approaches. Syntax, Semantics and Structure in Sta- tistical Translation, page 103.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.1078"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "End-to-end continuous speech recognition using attention-based recurrent NN: First results",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Chorowski",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.1602"
]
},
"num": null,
"urls": [],
"raw_text": "Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. End-to-end continuous speech recognition using attention-based recurrent NN: First results. arXiv preprint arXiv:1412.1602.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Darpa TIMIT acoustic-phonetic continous speech corpus cd-rom",
"authors": [
{
"first": "John",
"middle": [
"S"
],
"last": "Garofolo",
"suffix": ""
},
{
"first": "Lori",
"middle": [
"F"
],
"last": "Lamel",
"suffix": ""
},
{
"first": "William",
"middle": [
"M"
],
"last": "Fisher",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [
"G"
],
"last": "Fiscus",
"suffix": ""
},
{
"first": "David",
"middle": [
"S"
],
"last": "Pallett",
"suffix": ""
}
],
"year": 1993,
"venue": "NIST speech disc 1-1.1. NASA STI/Recon technical report n",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John S Garofolo, Lori F Lamel, William M Fisher, Jonathon G Fiscus, and David S Pallett. 1993. Darpa TIMIT acoustic-phonetic continous speech corpus cd-rom. NIST speech disc 1-1.1. NASA STI/Recon technical report n, 93.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Supervised sequence labelling",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2012,
"venue": "Supervised Sequence Labelling with Recurrent Neural Networks",
"volume": "",
"issue": "",
"pages": "5--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2012. Supervised sequence labelling. In Supervised Sequence Labelling with Recurrent Neu- ral Networks, pages 5-13. Springer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep visualsemantic alignments for generating image descriptions",
"authors": [
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "3128--3137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrej Karpathy and Li Fei-Fei. 2015. Deep visual- semantic alignments for generating image descrip- tions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128-3137.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Creating corpora for speech-to-speech translation",
"authors": [
{
"first": "Genichiro",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Toshiyuki",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "Seiichi",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Eighth European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Genichiro Kikui, Eiichiro Sumita, Toshiyuki Takezawa, and Seiichi Yamamoto. 2003. Creating corpora for speech-to-speech translation. In Eighth European Conference on Speech Communication and Technology.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Joint ctc-attention based end-to-end speech recognition using multi-task learning",
"authors": [
{
"first": "Suyoun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Hori",
"suffix": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
}
],
"year": 2017,
"venue": "Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "4835--4839",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suyoun Kim, Takaaki Hori, and Shinji Watanabe. 2017. Joint ctc-attention based end-to-end speech recognition using multi-task learning. In Acous- tics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, pages 4835- 4839. IEEE.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Pro- ceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177-180. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Navdeep Jaitly, and Ilya Sutskever. 2016. Learning online alignments with continuous rewards policy gradient",
"authors": [
{
"first": "Yuping",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Chung-Cheng",
"middle": [],
"last": "Chiu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1608.01281"
]
},
"num": null,
"urls": [],
"raw_text": "Yuping Luo, Chung-Cheng Chiu, Navdeep Jaitly, and Ilya Sutskever. 2016. Learning online alignments with continuous rewards policy gradient. arXiv preprint arXiv:1608.01281.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.04025"
]
},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Regularizing neural networks by penalizing confident output distributions",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Pereyra",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Tucker",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Chorowski",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.06548"
]
},
"num": null,
"urls": [],
"raw_text": "Gabriel Pereyra, George Tucker, Jan Chorowski, \u0141ukasz Kaiser, and Geoffrey Hinton. 2017. Regular- izing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Kaldi speech recognition toolkit",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Arnab",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "Nagendra",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Silovsky",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Stemmer",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Vesely",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE 2011 Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Pro- cessing Society. IEEE Catalog No.: CFP11SRW- USB.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Online and linear-time attention by enforcing monotonic alignments",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Ron",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Eck",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.00784"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Thang Luong, Peter J Liu, Ron J Weiss, and Douglas Eck. 2017. Online and linear-time at- tention by enforcing monotonic alignments. arXiv preprint arXiv:1704.00784.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The dynamic representation of scenes",
"authors": [
{
"first": "Ronald",
"middle": [
"A"
],
"last": "Rensink",
"suffix": ""
}
],
"year": 2000,
"venue": "Visual cognition",
"volume": "7",
"issue": "1-3",
"pages": "17--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald A Rensink. 2000. The dynamic representation of scenes. Visual cognition, 7(1-3):17-42.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sequence-to-Sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence-to-Sequence learning with neural net- works. In Advances in neural information process- ing systems, pages 3104-3112.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Chainer: a next-generation open source framework for deep learning",
"authors": [
{
"first": "Seiya",
"middle": [],
"last": "Tokui",
"suffix": ""
},
{
"first": "Kenta",
"middle": [],
"last": "Oono",
"suffix": ""
},
{
"first": "Shohei",
"middle": [],
"last": "Hido",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Clayton",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Workshop on Machine Learning Systems (Learn-ingSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (Learn- ingSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural ma- chine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2048--2057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd In- ternational Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 2048- 2057.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sequenceto-sequence neural net models for grapheme-tophoneme conversion",
"authors": [
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaisheng Yao and Geoffrey Zweig. 2015. Sequence- to-sequence neural net models for grapheme-to- phoneme conversion. In Sixteenth Annual Confer- ence of the International Speech Communication As- sociation.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Local monotonic attention. Part (1) of Figure 2. Assume we have source sequence with length S, which is encoded by the stack of Bi-LSTM (see Figure 1) into S encoded states h e = [h e 1 , ..., h e S ]",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Results from baseline and proposed method on G2P task with CMUDict test set",
"content": "<table><tr><td>Model</td><td>PER (%)</td><td>WER (%)</td></tr><tr><td>Baseline</td><td/><td/></tr><tr><td>Enc-Dec LSTM (2 lyr) (Yao and Zweig, 2015)</td><td colspan=\"2\">7.63 28.61</td></tr><tr><td>Bi-LSTM (3 lyr) (Yao and Zweig, 2015)</td><td colspan=\"2\">5.45 23.55</td></tr><tr><td>Att Enc-Dec with Global MLP Scorer (ours)</td><td colspan=\"2\">5.96 25.55</td></tr><tr><td>Att Enc-Dec with local-m (ours) (Luong et al., 2015)</td><td colspan=\"2\">5.64 24.32</td></tr><tr><td>Proposed</td><td/><td/></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 2)</td><td colspan=\"2\">5.45 23.15</td></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 3)</td><td colspan=\"2\">5.43 23.19</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "",
"content": "<table><tr><td colspan=\"2\">: Results from baseline and proposed</td></tr><tr><td colspan=\"2\">method on English-to-France and Indonesian-to-</td></tr><tr><td>English translation tasks.</td><td/></tr><tr><td>Model</td><td>BLEU</td></tr><tr><td colspan=\"2\">BTEC English to France</td></tr><tr><td>Baseline</td><td/></tr><tr><td>Att Enc-Dec with Global MLP Scorer</td><td>49.0</td></tr><tr><td>Att Enc-Dec with local-m (ours) (Luong et al., 2015)</td><td>50.4</td></tr><tr><td>Proposed</td><td/></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 4)</td><td>51.2</td></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 6)</td><td>51.1</td></tr><tr><td colspan=\"2\">BTEC Indonesian to English</td></tr><tr><td>Baseline</td><td/></tr><tr><td>Att Enc-Dec with Global MLP Scorer</td><td>38.2</td></tr><tr><td>Att Enc-Dec with local-m (ours) (Luong et al., 2015)</td><td>39.8</td></tr><tr><td>Proposed</td><td/></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 4)</td><td>40.9</td></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 6)</td><td>41.8</td></tr><tr><td>7.3 Result Discussion</td><td/></tr></table>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}