{
"paper_id": "I17-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:39:27.566538Z"
},
"title": "Attentive Language Models",
"authors": [
{
"first": "Giancarlo",
"middle": [
"D"
],
"last": "Salton",
"suffix": "",
"affiliation": {},
"email": "giancarlo.salton@mydit.ie"
},
{
"first": "Robert",
"middle": [
"J"
],
"last": "Ross",
"suffix": "",
"affiliation": {},
"email": "robert.ross@dit.ie"
},
{
"first": "John",
"middle": [
"D"
],
"last": "Kelleher",
"suffix": "",
"affiliation": {},
"email": "john.d.kelleher@dit.ie"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we extend Recurrent Neural Network Language Models (RNN-LMs) with an attention mechanism. We show that an Attentive RNN-LM (with 14.5M parameters) achieves a better perplexity than larger RNN-LMs (with 66M parameters) and achieves performance comparable to an ensemble of 10 similar sized RNN-LMs. We also show that an Attentive RNN-LM needs less contextual information to achieve similar results to the stateof-the-art on the wikitext2 dataset.",
"pdf_parse": {
"paper_id": "I17-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we extend Recurrent Neural Network Language Models (RNN-LMs) with an attention mechanism. We show that an Attentive RNN-LM (with 14.5M parameters) achieves a better perplexity than larger RNN-LMs (with 66M parameters) and achieves performance comparable to an ensemble of 10 similar sized RNN-LMs. We also show that an Attentive RNN-LM needs less contextual information to achieve similar results to the stateof-the-art on the wikitext2 dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language Models (LMs) are an essential component in a range of Natural Language Processing applications, such as Statistical Machine Translation and Speech Recognition (Schwenk et al., 2012) . An LM provides a probability for a sequence of words in a given language, reflecting fluency and the likelihood of that word sequence occurring in that language.",
"cite_spans": [
{
"start": 168,
"end": 190,
"text": "(Schwenk et al., 2012)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years Recurrent Neural Networks (RNNs) have improved the state-of-the-art in LM research (J\u00f3zefowicz et al., 2016) . Sequential data prediction, however, is still considered a challenge in Artificial Intelligence (Mikolov et al., 2010) given that, in general, prediction accuracy degrades as the size of sequences increase.",
"cite_spans": [
{
"start": 99,
"end": 124,
"text": "(J\u00f3zefowicz et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 223,
"end": 245,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "RNN-LMs sequentially propagate forward a context vector by integrating the information generated by each prediction step into the context used for the next prediction. One consequence of this forward propagation of information is that older information tends to fade from the context as new information is integrated into the context. As a result, RNN-LMs struggle in situations where there is a long-distance dependency because the relevant information from the start of the dependency has faded by the time the model has spanned the dependency. A second problem is that the context can be dominated by the more recent information so when an RNN-LM does make an error this error can be propagated forward resulting in a cascade of errors through the rest of the sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent sequence-to-sequence research the concept of \"attention\" has been developed to enable RNNs to align different parts of the input and output sequences. Examples of attention based architectures include Neural Machine Translation (NMT) (Bahdanau et al., 2015; Luong et al., 2015) and image captioning (Xu et al., 2015) .",
"cite_spans": [
{
"start": 244,
"end": 267,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 268,
"end": 287,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 309,
"end": 326,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we extend the RNN-LM context mechanism with an attention mechanism that enables the model to bring forward context information from different points in the context sequence history. We hypothesis that this attention mechanism enables RNN-LMs to: (a) bridge long-distance dependencies, thereby avoiding errors; and, (b) to overlook recent errors by choosing to focus on contextual information preceding the error, thereby avoiding error propagation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We show that a medium sized 1 Attentive RNN-LM 2 achieves better performance than larger \"standard\" models and performance comparable to an ensemble of 10 \"medium\" sized LSTM RNN-LMs on the PTB. We also show that an Attentive RNN-LM needs less contextual information in order to achieve similar results to state-ofthe-art results over the wikitext2 dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Outline: \u00a72 introduces RNN-LMs and related research, \u00a73 outlines our approach, \u00a74 describes our experiments, \u00a75 presents our analysis of the models' performance and \u00a76 our conclusions. 1 We adopt the terminology of Zaremba et al. (2015) and Press and Wolf (2016) when referring to the size of the RNNs.",
"cite_spans": [
{
"start": 185,
"end": 186,
"text": "1",
"ref_id": null
},
{
"start": 215,
"end": 236,
"text": "Zaremba et al. (2015)",
"ref_id": "BIBREF17"
},
{
"start": 241,
"end": 262,
"text": "Press and Wolf (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Code available at https://github.com/ giancds/attentive_lm 2 RNN-Language Models RNN-LMs model the probability of a sequence of words by modelling the joint probability of the words in the sequence using the chain rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "p(w 1 , . . . , w N ) = N t=1 p(w n |w 1 , . . . , w n\u22121 ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where N is the number of words in the sequence. The context of the word sequence is modelled by an RNN and for each position in the sequence the probability distribution over the vocabulary is calculated using a softmax on the output related to that position of the RNN's last layer (i.e., the last layer's hidden state) (J\u00f3zefowicz et al., 2016) . Examples of such models include Zaremba et al. (2015) and Press and Wolf (2016) . These models are composed of LSTM units (Hochreiter and Schmidhuber, 1997) and apply regularization to improve the RNN performance. In addition, Press and Wolf (2016) also uses the same embedding matrix that is used to transform the input words to transform the output of the last RNN layer to feed it to the softmax layer to make the next prediction.",
"cite_spans": [
{
"start": 321,
"end": 346,
"text": "(J\u00f3zefowicz et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 381,
"end": 402,
"text": "Zaremba et al. (2015)",
"ref_id": "BIBREF17"
},
{
"start": 407,
"end": 428,
"text": "Press and Wolf (2016)",
"ref_id": "BIBREF12"
},
{
"start": 471,
"end": 505,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 576,
"end": 597,
"text": "Press and Wolf (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
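To make the chain-rule factorisation in Eq. (1) concrete, the following is a minimal numpy sketch of an RNN-LM that accumulates log p(w_n | w_1, . . . , w_{n-1}) from a softmax over the last layer's hidden state. The toy dimensions, the plain tanh cell standing in for the LSTM layers, the start-of-sentence id and all parameter names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
V, d = 10, 8                                # toy vocabulary and hidden sizes (assumed)
E  = rng.normal(scale=0.05, size=(V, d))    # input embeddings
Wx = rng.normal(scale=0.05, size=(d, d))    # input-to-hidden weights
Wh = rng.normal(scale=0.05, size=(d, d))    # recurrent weights (plain RNN stands in for LSTM)
Wo = rng.normal(scale=0.05, size=(V, d))    # output projection
b  = np.zeros(V)

def rnn_lm_log_prob(word_ids):
    """log p(w_1..w_N) = sum_n log p(w_n | w_1..w_{n-1}), as in Eq. (1)."""
    h = np.zeros(d)
    log_p = 0.0
    prev = 0                                # assumed start-of-sentence id
    for w in word_ids:
        h = np.tanh(Wx @ E[prev] + Wh @ h)  # context carried forward by the RNN
        p = softmax(Wo @ h + b)             # distribution over the vocabulary
        log_p += np.log(p[w])
        prev = w
    return log_p

print(rnn_lm_log_prob([3, 1, 4, 1, 5]))
```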
{
"text": "Attention mechanisms were first proposed in \"encoder-decoder\" architectures for NMT systems. Bahdanau et al. (2015) proposed a model that stores all the encoder RNN's outputs and uses them together with the decoder RNN's state h t\u22121 to compute a context vector that, in turn, is used to compute the state h t . In Luong et al. (2015) a generalization of the model of Bahdanau et al. (2015) is presented which uses the decoder RNN's state, in this instance h t rather than h t\u22121 , along with the outputs of the encoder RNN to compute a context vector that it then concatenated with h t before making the next prediction. Both models have similar performance and achieve state-of-theart performance for some language pairs; however, they suffer from repeating words or dropping translations at the output (Mi et al., 2016) .",
"cite_spans": [
{
"start": 93,
"end": 115,
"text": "Bahdanau et al. (2015)",
"ref_id": "BIBREF0"
},
{
"start": 314,
"end": 333,
"text": "Luong et al. (2015)",
"ref_id": "BIBREF7"
},
{
"start": 367,
"end": 389,
"text": "Bahdanau et al. (2015)",
"ref_id": "BIBREF0"
},
{
"start": 803,
"end": 820,
"text": "(Mi et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is previous work on using past information to improve RNN-LMs. Tran et al. (2016) propose an extension to LSTM cells to include memory areas, which depend on input words, at the output of every hidden layer. The model produces good results but the dependency on input words expands the number of parameters in each LSTM cell in proportion to the vocabulary size in use.",
"cite_spans": [
{
"start": 60,
"end": 87,
"text": "RNN-LMs. Tran et al. (2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Similarly, Cheng et al. (2016) propose storing the LSTM's memory cells of every layer at each timestep and draw a context vector for each memory cell for each new input to attend to previous content and compute its output. Although their model requires fewer parameters than the model of Tran et al. 2016, the performance of the model is worse than regularized \"standard\" RNN-LM as in Zaremba et al. (2015) and Press and Wolf (2016) . Daniluk et al. (2017) propose an augmented version of the attention mechanism proposed by Bahdanau et al. 2015on which their model outputs 3 vectors called key-value-predict. The key (a vector of real numbers) is used to retrieve a single hidden state from the past. Grave et al. (2017) propose an LM augmented with a \"memory cache\" that stores tuples of hidden-states plus word embeddings (for the word predicted from that hidden state). The memory cache is used to help the current prediction by retrieving the word embedding associated with the hidden state in the memory most similar to the current hidden state. Merity et al. (2017) proposed a mixture model that includes an RNN and a pointer network. This model computes one distribution for the softmax component and one distribution for the pointer network, using a sentinel gating function to combine both distributions. In spite of the fact that their model is similar to the model of Grave et al. (2017) , their model requires an extra transformation between the current state of the RNN and those stored in the memory.",
"cite_spans": [
{
"start": 11,
"end": 30,
"text": "Cheng et al. (2016)",
"ref_id": "BIBREF1"
},
{
"start": 385,
"end": 406,
"text": "Zaremba et al. (2015)",
"ref_id": "BIBREF17"
},
{
"start": 411,
"end": 432,
"text": "Press and Wolf (2016)",
"ref_id": "BIBREF12"
},
{
"start": 435,
"end": 456,
"text": "Daniluk et al. (2017)",
"ref_id": "BIBREF2"
},
{
"start": 702,
"end": 721,
"text": "Grave et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 1052,
"end": 1072,
"text": "Merity et al. (2017)",
"ref_id": "BIBREF9"
},
{
"start": 1380,
"end": 1399,
"text": "Grave et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These recent models have a number of drawbacks. The systems that extend the architecture of LSTM units struggle to process large vocabularies because the system memory expands to the size of the vocabulary. For systems that retrieve a single hidden-state or word from memory, if the prediction is not correct, the RNN-LM will not receive the correct past information. Finally, the models of Merity et al. (2017) and Grave et al. (2017) use a fixed-length memory of L previous hidden states to store and retrieve information from the past (100 states in the case of Merity et al. (2017) and 2,000 states in the case of Grave et al. (2017) ). As we shall explain in \u00a73 our \"attentive\" RNN-LMs have a memory of dynamic-length that grows with the length of the input and therefore, in general, are computationally cheaper.",
"cite_spans": [
{
"start": 391,
"end": 411,
"text": "Merity et al. (2017)",
"ref_id": "BIBREF9"
},
{
"start": 416,
"end": 435,
"text": "Grave et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 565,
"end": 585,
"text": "Merity et al. (2017)",
"ref_id": "BIBREF9"
},
{
"start": 618,
"end": 637,
"text": "Grave et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We see our \"attentive\" RNN-LM (see \u00a73) as a generalized version of these models as we rely on the encoded information in the hidden state of the RNN-LM to represent previous input words and we use a set of attention weights (instead of a key) to retrieve information from the past inputs. The main advantages of our approach are: (a) our model does not need vocabulary sized matrices in the computations of the attention mechanism and therefore has a reduced number of parameters; and (b) as we use all previous hidden states of the RNN-LM in the computation for the attention weights, all of those states will influence the next prediction based on the weights calculated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we extend RNN-LMs to include an attention mechanism over previous inputs. We employ a multi-layered RNN to encode the input and, at each timestep, we store the output of the last recurrent layer (i.e., its hidden state h t ) into a memory buffer. We compute a score for each hidden state h i (\u2200 i \u2208 {1, . . . , t \u2212 1}) stored in memory and use these scores to weight each h i . From these weighted hidden states we generate a context vector c t that is concatenated with the current hidden state h t to predict the next word in the sequence. Figure 1 illustrates a step of our model when predicting the fourth word in a sequence.",
"cite_spans": [],
"ref_spans": [
{
"start": 555,
"end": 563,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "We propose two different attention score functions that can be used to compute the context vector c t . One calculates the attention score of each h i using just the information in the state (the single(h i ) score introduced below). The other calculates the attention scores for each h i by combining the information from that state with the information from the current state h t (the combined(h i , h t ) score described below). Each of these mechanisms defines a separate Attentive RNN-LMs and we report results for each of these LMs in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "More formally, each h t is computed as follows, where x t is the input at timestep t:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = RN N (x t , h t\u22121 )",
"eq_num": "(2)"
}
],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "The context vector c t is then generated using Eq. (3) where each scalar weight a i is a softmax (Eq. (4)) and the score for each hidden state (h i ) in the memory buffer is one of Eq. (5) or Eq. (6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c t = t\u22121 i=1 a i h i",
"eq_num": "(3)"
}
],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "Figure 1: Illustration of a step of the Attentive RNN-LM with combined score. In this example, the model receives the third word as input (w 3 ) after storing the previous states (h 1 and h 2 ) in memory. After producing h 3 , the model computes the context vector (in this case c 3 ) that will be concatenated to h 3 before the softmax layer for the prediction of the fourth word w 4 . Note that if the single score is in use (Eq. (9)), the arrow from the RNN output for h 3 to the attention layer is dropped. Also note that h 3 is stored in memory only at the end of this process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a i = exp(score(h i , h t )) t\u22121 j=1 exp(score(h j , h t ))",
"eq_num": "(4)"
}
],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "score(h i , h t ) = single(h i ) (5) combined(h i , h t ) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "We then merge c t with the current state h t using a concatenation layer 3 , where W c is a matrix of parameters and b t is a bias vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = tanh(W c [h t ; c t ] + b t )",
"eq_num": "(7)"
}
],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "We compute the next word probability using Eq.8 where W is a matrix of parameters and b is a bias vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "p(w t |w <t , x) = sof tmax(Wh t + b) (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
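Putting Eqs. (2)-(8) together (with the combined score of Eq. (10) as the scoring function), one prediction step of an attentive LM can be sketched as follows. This is a minimal numpy illustration, not the released code: the toy dimensions, parameter names and the random stand-in hidden states are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

d, V = 8, 10
rng = np.random.default_rng(1)
W_s = rng.normal(scale=0.05, size=(d, d))       # attention parameters (Eq. 10)
W_q = rng.normal(scale=0.05, size=(d, d))
v_s = rng.normal(scale=0.05, size=d)
W_c = rng.normal(scale=0.05, size=(d, 2 * d))   # concatenation layer (Eq. 7)
b_t = np.zeros(d)
W   = rng.normal(scale=0.05, size=(V, d))       # output projection (Eq. 8)
b   = np.zeros(V)

def attentive_step(h_t, memory):
    """One prediction step; memory holds the stored states h_1..h_{t-1}."""
    if memory:
        H = np.stack(memory)                                  # (t-1, d)
        scores = np.tanh(H @ W_s.T + h_t @ W_q.T) @ v_s       # combined score, Eq. (10)
        a = softmax(scores)                                   # Eq. (4)
        c_t = a @ H                                           # Eq. (3)
    else:
        c_t = np.zeros_like(h_t)                              # nothing to attend to yet
    h_merged = np.tanh(W_c @ np.concatenate([h_t, c_t]) + b_t)   # Eq. (7)
    return softmax(W @ h_merged + b)                          # Eq. (8)

memory = [rng.normal(size=d) for _ in range(3)]               # stand-ins for stored states
print(attentive_step(rng.normal(size=d), memory).sum())       # ~1.0
```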
{
"text": "Single score. This score is calculated for each h i using just the information stored the state in itself. The score single(h i ) is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "single(h i ) = v s tanh(W s h i )",
"eq_num": "(9)"
}
],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "where the parameter matrix W s and vector v s are both learned by the attention mechanism and represents the dot product.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "When applying the single(h i ) score, we can think of the score a i as a scalar summary of the \"absolute relevance\" of the state h i . When a new state h t is added to the buffer its scalar summary a i is calculated by first using Eq.9 to get the score for the state and then applying a softmax function over the set of state scores including the score for the new state. Although the scores for each state do not change from one timestep to the next, applying the softmax results in recalculation of the distribution of the scalar summaries for all the states h 0 , . . . , h t . As a result the a i 's for each state in Eq.3 changes from one prediction to the next as new states are added and the weight mass is distributed across more states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
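Because the single scores themselves do not change as the buffer grows, only the softmax needs to be recomputed when a new state arrives; a small sketch of that bookkeeping is below (the function and variable names are assumed, not taken from the released code).

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

cached_scores = []                      # single(h_i) for every state already in the buffer

def add_state_and_renormalise(new_score):
    # each per-state score is computed once (Eq. 9) and cached ...
    cached_scores.append(new_score)
    # ... but the softmax is re-applied over all cached scores, so every a_i shifts
    return softmax(np.array(cached_scores))

for s in [0.2, -0.4, 1.1]:
    weights = add_state_and_renormalise(s)
print(weights)                          # distribution over all states seen so far
```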
{
"text": "Combined score. This score is calculated for each h i by combining the information from that state with the information from the current state h t . The score combined(h i , h t ) is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "combined(h i , h t ) = v s tanh(W s h i + W q h t )",
"eq_num": "(10)"
}
],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "where the parameter matrices W s and W q and vector v s are learned by the attention mechanism, and is the same as in Eq. 9. Notice that because W q h t does not depend on any other state and is used in the computations with all other h i , we can efficiently compute it once and use the results in Eq. 10, thus reducing computation time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
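The two scoring functions, and the saving obtained by computing W_q h_t only once per timestep, can be sketched as follows. This is a numpy sketch; the array names and shapes are assumptions, not the released implementation.

```python
import numpy as np

def single_scores(H, W_s, v_s):
    # single(h_i) = v_s . tanh(W_s h_i): depends only on each stored state (Eq. 9)
    return np.tanh(H @ W_s.T) @ v_s

def combined_scores(H, h_t, W_s, W_q, v_s):
    # combined(h_i, h_t): W_q h_t is the same for every h_i, so compute it once (Eq. 10)
    q = W_q @ h_t
    return np.tanh(H @ W_s.T + q) @ v_s

# tiny usage example with random stand-ins
rng = np.random.default_rng(2)
d, t_minus_1 = 8, 4
H = rng.normal(size=(t_minus_1, d))          # stored states h_1..h_{t-1}
h_t = rng.normal(size=d)
W_s, W_q = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v_s = rng.normal(size=d)
print(single_scores(H, W_s, v_s))
print(combined_scores(H, h_t, W_s, W_q, v_s))
```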
{
"text": "The score a i defined by combined(h i , h t ), can be understood as the \"relative relevance\" of state h i to the current state h t . Using this attention mechanism the score for each h i is different for each timestep according to its relevance to the current hidden state h t of the RNN. Consequently, the scores for each h i and the distribution over these scores changes from one timestep to the next. Using this scoring, the model can decide whether it should pay more attention to the current state, to a previous state or use past states to \"supplement\" the information for the next prediction. In \u00a75 we present and analysis of how the model attends to different parts of its history as it generates a sequence of predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Language Models",
"sec_num": "3"
},
{
"text": "To evaluate our Attentive RNN-LMs we conducted experiments over the PTB (Marcus et al., 1994) and wikitext2 (Merity et al., 2017) datasets. We first describe the setup of our Attentive RNN-LM for the PTB ( \u00a74.1) and wikitext2 ( \u00a74.2) datasets and then discuss the results ( \u00a74.3). We compare our results on PTB to Zaremba et al. 2015and Press and Wolf (2016) the best performing LSTM-LMs on the PTB, two memory augmented networks (Grave et al. (2017) and Merity et al. (2017) ) and PTB state-of-the-art ensemble models of Zaremba et al. (2015) . On wikitext2 we take (Merity et al., 2017) , the creators of the dataset, and (Grave et al., 2017) , the current state-of-theart, as our baselines.",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "(Marcus et al., 1994)",
"ref_id": "BIBREF8"
},
{
"start": 108,
"end": 129,
"text": "(Merity et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 337,
"end": 358,
"text": "Press and Wolf (2016)",
"ref_id": "BIBREF12"
},
{
"start": 430,
"end": 450,
"text": "(Grave et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 455,
"end": 475,
"text": "Merity et al. (2017)",
"ref_id": "BIBREF9"
},
{
"start": 522,
"end": 543,
"text": "Zaremba et al. (2015)",
"ref_id": "BIBREF17"
},
{
"start": 567,
"end": 588,
"text": "(Merity et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 624,
"end": 644,
"text": "(Grave et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We evaluate our Attentive RNN-LM over the PTB dataset using the standard split which consists of 887K, 70K and 78K tokens on the training, validation and test sets respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PTB Setup",
"sec_num": "4.1"
},
{
"text": "We follow, in part, the parameterization used by Zaremba et al. (2015) and Press and Wolf (2016) with some changes. We trained an Attentive RNN-LM with 2 layers of 650 LSTM units using Stochastic Gradient Descent (SGD) with an initial learning rate of 1.0, halving the learning rate at each epoch after 12 epochs, to minimize the average negative log probability of the target words.",
"cite_spans": [
{
"start": 49,
"end": 70,
"text": "Zaremba et al. (2015)",
"ref_id": "BIBREF17"
},
{
"start": 75,
"end": 96,
"text": "Press and Wolf (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PTB Setup",
"sec_num": "4.1"
},
{
"text": "We train the models until we do not get any perplexity improvements over the validation set with an early stop counter of 10 epochs (i.e., patience of 10 epochs). Once the model runs out of patience, we rollback its parameters and use the model that achieved the best validation perplexity to calculate the perplexity over the test set. We initialize the weight matrices of the network uniformly in [\u22120.05, 0.05] while all biases are initialized to a constant value at 0.0. We also apply 50% dropout (Srivastava et al., 2014) to the non-recurrent con-nections and clip the norm of the gradients, normalized by mini-batch size, at 5.0. In all our experiments, we follow Press and Wolf (2016) and tie the matrix W in Eq. (8) to be the embedding matrix (which also has 650 dimensions) used to represent the input words.",
"cite_spans": [
{
"start": 500,
"end": 525,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 669,
"end": 690,
"text": "Press and Wolf (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PTB Setup",
"sec_num": "4.1"
},
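The training schedule described above (SGD with an initial learning rate of 1.0 halved each epoch after epoch 12, a patience of 10 epochs, and a rollback to the best validation model) can be sketched as a plain training loop. The train_one_epoch and validate functions below are placeholder stubs for the actual SGD epoch (with gradient-norm clipping at 5.0) and the validation-perplexity computation; everything here is illustrative rather than the authors' code.

```python
import copy
import random

def train_one_epoch(model, lr):
    # placeholder: one SGD epoch, gradients clipped at norm 5.0 (normalised by batch size)
    pass

def validate(model):
    # placeholder: returns validation perplexity
    return random.uniform(70, 90)

def train(model, max_epochs=100, patience=10):
    lr, best_ppl, best_state, stale = 1.0, float("inf"), None, 0
    for epoch in range(1, max_epochs + 1):
        if epoch > 12:
            lr /= 2.0                      # halve the learning rate each epoch after epoch 12
        train_one_epoch(model, lr)
        ppl = validate(model)
        if ppl < best_ppl:                 # keep a snapshot of the best-validation model
            best_ppl, best_state, stale = ppl, copy.deepcopy(model), 0
        else:
            stale += 1
            if stale >= patience:          # early stop after 10 epochs without improvement
                break
    return best_state                      # roll back to the best model for test evaluation

best = train(model={"params": None})
```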
{
"text": "Contrary to Zaremba et al. (2015) and Press and Wolf (2016), we do not allow successive minibatches to sequentially traverse the dataset. In other words, we follow the standard practice to reinitialize the hidden state of the network at the beginning of each mini-batch, by setting it to all zeros. This way, we do not allow the attention window to span across sentence boundaries 4 . We use all sentences in the training set, we truncate all sentences longer than 35 words and pad all sentences shorter than 35 words with a special symbol so all sentences are the same size. We use a vocabulary size of 10K words and a batch size of 32. All UNK words (following the pre-processing of (2015)) were kept during the training, validation and testing phases.",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "Zaremba et al. (2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PTB Setup",
"sec_num": "4.1"
},
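The sentence-level preprocessing described above (truncate to 35 tokens, pad shorter sentences with a special symbol, batch size 32, and a hidden state reset at every mini-batch so the attention window never crosses sentence boundaries) can be sketched as follows. The padding symbol and helper names are assumptions for illustration only.

```python
MAX_LEN, PAD = 35, "<pad>"   # sentence length cap and padding symbol from the setup above

def pad_or_truncate(sentence):
    """Truncate sentences longer than 35 tokens; pad shorter ones to the same length."""
    tokens = sentence[:MAX_LEN]
    return tokens + [PAD] * (MAX_LEN - len(tokens))

def batches(sentences, batch_size=32):
    for i in range(0, len(sentences), batch_size):
        # the hidden state (and the attention buffer) is re-initialised to zeros for
        # every batch, so no information is carried across sentence boundaries
        yield [pad_or_truncate(s) for s in sentences[i:i + batch_size]]

print(next(batches([["the", "cat", "sat"], ["a", "dog"]])))
```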
{
"text": "We also evaluate our Attentive RNN-LM over the wikitext2 dataset (Merity et al., 2017) . We use the standard train, validation and test splits which consists of around 2M, 217K tokens and 245k tokens respectively. This dataset is composed of \"Good\" and \"Featured\" articles on Wikipedia.",
"cite_spans": [
{
"start": 65,
"end": 86,
"text": "(Merity et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "wikitext2 Setup",
"sec_num": "4.2"
},
{
"text": "There is an important difference between how we trained and tested our models on the wiki-text2 dataset and how the baseline systems were trained and tested. Both Merity et al. (2017) and Grave et al. (2017) permitted the memory buffers of their systems to span sentence boundaries (and, indeed, they also did mini-batch traversal which allowed the memory buffers to traverse mini-batch boundaries) whereas we reset our systems memory at each sentence boundary. This difference is important because in the wikitext2 dataset the sentences are not shuffled and, therefore, are sequentially related to each other. As a result, systems that carry sequential information from previous sentences into the current sentence have an advantage in that they utilise contextual cues from the preceding sentence to inform the predictions at the start of the new sentence. By compari-son, systems that reset their memory at the start of each sentence must reconstruct their context models from scratch and face a \"cold-start\" problem for the early predictions in the sentence.",
"cite_spans": [
{
"start": 163,
"end": 183,
"text": "Merity et al. (2017)",
"ref_id": "BIBREF9"
},
{
"start": 188,
"end": 207,
"text": "Grave et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "wikitext2 Setup",
"sec_num": "4.2"
},
{
"text": "The core reason why (Merity et al., 2017) and (Grave et al., 2017) did not reset their memories across sentence boundaries and we do is that these baseline systems use a fixed length memory whereas our \"attention\" mechanism has a variable length memory. A variable length memory has benefits in terms of both computational cost and the fact that the memory size is dynamically fitted to the context. However, just as the system designer for a fixed length memory LM must fix the memory size hyper-parameter in some fashion, so to the designer of a variable length memory LM must decide when the memory buffer is reset. For our work, we have decided to reset our memory buffer at sentence boundaries because many of the tasks for which LMs are used (e.g. NMT) work on a sentence by sentence basis. If required it would be possible for us to extend the memory buffer to the start of the preceding sentence (or some other landmark is the sequence history). However, this would incur extra computational cost, and as we shall see our Attentive RNN-LMs are still competitive on the wikitext2 dataset despite the fact that the baselines systems are given access to longer context sequences.",
"cite_spans": [
{
"start": 20,
"end": 41,
"text": "(Merity et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 46,
"end": 66,
"text": "(Grave et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "wikitext2 Setup",
"sec_num": "4.2"
},
{
"text": "We trained an Attentive RNN-LM with 2 layers of 1000 LSTM units using Stochastic Gradient Descent (SGD) with an initial learning rate of 1.0, decaying the learning rate by a factor of 1.15 at each epoch after 14 epochs, to minimize the average negative log probability of the target words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "wikitext2 Setup",
"sec_num": "4.2"
},
{
"text": "Similarly to the PTB model we also train this model with an early stop counter of 10 epochs and we initialize the weight matrices of the network uniformly in [\u22120.05, 0.05] while all biases are initialized to a constant value at 0.0. We apply 65% dropout to the non-recurrent connections and clip the norm of the gradients, normalized by minibatch size, at 5.0. In all our experiments, we also follow Press and Wolf (2016) and tie the matrix W in Eq. (8) to be the embedding matrix (which has 1000 dimensions for this model) used to represent the input words. We use all sentences in the training set, we truncate all sentences longer than 35 words and pad all sentences shorter than 35 words with a special symbol so all sentences are the same length. We use a vocabulary size of 33,278 and a batch size of 32. All UNK words (following the pre-processing of (2017)) were kept during the training, validation and testing phases.",
"cite_spans": [
{
"start": 400,
"end": 421,
"text": "Press and Wolf (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "wikitext2 Setup",
"sec_num": "4.2"
},
{
"text": "In Table 1 we report the results of our experiments on the PTB dataset. As we can see in this table, the Attentive RNN-LMs outperforms all other single models on the PTB dataset. Although Attentive RNN-LMs have less parameters (10M) than the large \"regularized\" LSTM-LMs (66M parameters), they were capable of reducing the perplexity over both validation and test sets. This result shows that using an Attentive RNN-LM it is possible to achieve better perplexity scores with far fewer model parameters. Furthermore, Attentive RNN-LMs are able to achieve roughly the same results as the averaging of 10 RNN-LM models (with no attention) of the same size.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In addition, there is little difference between the results of the Attentive RNN-LM with single score (Eq.9) and the Attentive RNN-LM with combined score (Eq.10) with the single score slightly outperforming the the combined score. We believe this is because the model using the combined(h i , h t ) score has more parameters to optimize and, thus, more difficulties in settling to a good local optima.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In Table 2 we report the results on the wikitext2 dataset. Despite the fact that we reset the memory for the Attentive RNN-LM at each sentence boundary whereas the caches for the baseline systems span sentence boundaries, our best Attentive RNN-LM is within 1 perplexity point of the system of (2017) (which is allowed to cache 2,000 previous hidden states), and has a lower perplexity than all of the other baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "The purpose of our attention mechanism is to enable an RNN-LM to bridge long distance dependencies in language. Therefore, we expect the attention mechanism to select previous hidden states that are relevant to the current predictions. To analyse whether the attention mechanism is functioning as intend we analysed the evolution of attention weights in our Attentive RNN-LM as we calculated the perplexity for samples sentences using the models trained over the wikitext2 5 . Figure 2 show the evolution of attention weights, using both single and combined scoring, when calculating perplexities for 2 sentences containing nominal modifiers. In addition, Figure 3 show the evolution of attention weights for two sentences containing relative clauses, once again using both single and combined scoring. The words in the X-axis (horizontal) are the inputs at each timestep and the words in the Y-axis (vertical) are the next (or predicted) words. We suppressed weights that either equal to 1.0 (black squares) or 0.0 (white squares). Note that given the rounding to 4 decimal places, weights in some rows of the matrices may not sum to 1.0.",
"cite_spans": [],
"ref_spans": [
{
"start": 477,
"end": 485,
"text": "Figure 2",
"ref_id": null
},
{
"start": 656,
"end": 664,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of the Models",
"sec_num": "5"
},
{
"text": "None of the attention mechanisms worked as a proper attention mechanism. In other words, none of the mechanisms generated larger weights for specific words in the sentence, in comparison to the other words in the same sentence. Comparing the attention weights generated by both combined score and single score for both sentences, it is striking that the distribution of attention weights is very similar. For both Attentive RNN-LM models the attention spreads out across the history in a relatively equal fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Models",
"sec_num": "5"
},
{
"text": "Indeed, both models seem to take into consideration all previous states, creating a smoothing effect for the hidden states in the buffer. Therefore, no single state dominates the context vector by receiving a much larger attention weight than the others. We believe that this behaviour enables the models to bring forward information from the beginning of the sentence at the time it is making a prediction. This way, the models do not let information fade away from the context as it progresses to subsequent steps in a sequence and all previous information about the words that preceded the current timestep is available to the classifier in a manner that disregards recency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Models",
"sec_num": "5"
},
{
"text": "As a consequence of the smoothing effect, the model does not necessarily need to store information about the context of the sequence in the recurrent connections of the RNN. This behaviour enable the model to retrieve information from the buffer to remember past words without relying solely on the RNN's internal \"memory\". Therefore, the model can maximize the features extracted about an input word, creating an advantage over other RNN-LMs that need to both extract features and keep context regarding the sequence in its connections. (Zaremba et al., 2015) 100M 76.7 73.3 10 Medium regularized LSTMs (Zaremba et al., 2015) 200M 75.2 72.0 2 Large regularized LSTMs (Zaremba et al., 2015) 122M 76.9 73.6 10 Large regularized LSTMs (Zaremba et al., 2015) 660M 72.8 69.5 38 Large regularized LSTMs (Zaremba et al., 2015) 2508M 71.9 68.7 Table 1 : Perplexity results over the PTB. Symbols: WT = weight tying (Press and Wolf, 2016) ; WD = weight decay and BD = Bayesian Dropout, both suggested by Gal and Ghahramani (2015) . (Grave et al., 2017) --68.9 Table 2 : Perplexity results over the wikitext2.",
"cite_spans": [
{
"start": 538,
"end": 560,
"text": "(Zaremba et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 604,
"end": 626,
"text": "(Zaremba et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 668,
"end": 690,
"text": "(Zaremba et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 733,
"end": 755,
"text": "(Zaremba et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 798,
"end": 820,
"text": "(Zaremba et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 907,
"end": 929,
"text": "(Press and Wolf, 2016)",
"ref_id": "BIBREF12"
},
{
"start": 995,
"end": 1020,
"text": "Gal and Ghahramani (2015)",
"ref_id": "BIBREF3"
},
{
"start": 1023,
"end": 1043,
"text": "(Grave et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 837,
"end": 844,
"text": "Table 1",
"ref_id": null
},
{
"start": 1051,
"end": 1058,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of the Models",
"sec_num": "5"
},
{
"text": "Another interpretation of the smoothing effect is that it \"reinforces\" the signal in a similar fashion to residual connections in other RNNs and Deep Neural Networks architectures. Other RNN architectures use these residual connections as a shortcut to \"reinforce\" the signal of the current input and, thus, it still considers the current input only. The Attentive RNN-LM, however, uses all the previous hidden states to achieve a similar effect and create a stronger signal to the softmax classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "This paper proposes the use of attention mechanisms in RNN-LMs. These attention mechanisms enable an RNN-LM to consider information from its past when it is predicting the next word. We believe that this can help the LM to overcome the fading of relevant information as it traverses a long-distance dependency within a sequence and also to recover from a mistaken prediction by focusing on the context preceding the error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Our results show that an Attentive RNN-LM outperforms both RNN-LM models that use and that do not use past information to predict the next word in a sequence when trained on the PTB dataset. Furthermore, our Attentive RNN-LM achieves this performance using far fewer units than the \"standard\" RNN-LM and, therefore, less model parameters. Our results also show that our Attentive RNN-LM achieves similar results to an ensemble that averages over 10 similar sized (in terms of number of LSTM units) RNN-LMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In addition, our results demonstrate that our Attentive RNN-LM achieves similar to state-of-the-art results over the wikitext2 dataset. It is an interesting result given that we do not allow our model to look beyond the boundaries of the current sequence it is processing, whilst the state-of-the-art model is allowed to store 2,000 previous states in its cache.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In future work we plan to (a) test the performance of ensembles of Attentive RNN-LMs and (b) to study the use of the Attentive RNN-LM as the decoder within an NMT system. Figure 2 : Plot of attention weights for two sentences containing nominal modifiers. On the left column are the attention weights calculated by the combined score. On the right column are the attention weights calculated by the single score. The words in the X-axis (horizontal) are the inputs at each timestep and the words in the Y-axis (vertical) are the next (or predicted) words. We suppressed weights that either equal to 1.0 (black squares) or 0.0 (white squares). Note that given the rounding to 4 decimal places, weights in some rows of the matrices may not sum to 1.0. Figure 3 : Plot of attention weights for two sentences containing relative clauses. On the left column are the attention weights calculated by the combined score. On the right column are the attention weights calculated by the single score. The words in the X-axis (horizontal) are the inputs at each timestep and the words in the Y-axis (vertical) are the next (or predicted) words. We suppressed weights that either equal to 1.0 (black squares) or 0.0 (white squares). Note that given the rounding to 4 decimal places, weights in some rows of the matrices may not sum to 1.0.",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 179,
"text": "Figure 2",
"ref_id": null
},
{
"start": 750,
"end": 758,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We also have experimented with using a dot product and a feedforward layer to combine ht and ct and also using only ct, but those results were far below previous work in RNN-LM so we do not report them here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented to with successive mini-batches to sequentially traverse the dataset as inZaremba et al. (2015) but the performance of the model dropped considerably so we do not report those results here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The behaviour of the models on wikitext2 is similar to that of the models trained and evaluated on the PTB dataset, so for space reasons we only present the wikitext2 analysis here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was partly funded by the ADAPT Centre.The ADAPT Centre is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. Giancarlo D. Salton would like to thank CAPES (\"Coordena\u00e7\u00e3o de Aperfei\u00e7oamento de Pessoal de N\u00edvel Superior\") for his Science Without Borders scholarship, proc n. 9050-13-2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations, volume abs/1409.0473v6.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Long short-term memory-networks for machine reading",
"authors": [
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "551--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 551-561, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Frustratingly Short Attention Spans in Neural Language Modeling. 5th International Conference on Learning Representations",
"authors": [
{
"first": "Michal",
"middle": [],
"last": "Daniluk",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michal Daniluk, Tim Rockt\u00e4schel, Johannes Welbl, and Sebastian Riedel. 2017. Frustratingly Short At- tention Spans in Neural Language Modeling. 5th International Conference on Learning Representa- tions (ICLR'2017).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A theoretically grounded application of dropout in recurrent neural networks",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2015. A theoret- ically grounded application of dropout in recurrent neural networks.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving neural language models with a continuous cache",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017. Improving neural language models with a continuous cache. 5th International Conference on Learning Representations (ICLR'2017).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "9",
"issue": "",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. volume 9, pages 1735-1780.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Exploring the limits of language modeling",
"authors": [
{
"first": "Rafal",
"middle": [],
"last": "J\u00f3zefowicz",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafal J\u00f3zefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the lim- its of language modeling.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention- based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 1412-1421.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The penn treebank: Annotating predicate argument structure",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Macintyre",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Ferguson",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "Britta",
"middle": [],
"last": "Schasberger",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "114--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schas- berger. 1994. The penn treebank: Annotating predicate argument structure. In Proceedings of the Workshop on Human Language Technology, pages 114-119.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pointer sentinel mixture models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. 5th International Conference on Learning Representations (ICLR'2017).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Coverage embedding models for neural machine translation",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Baskaran Sankaran",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "955--960",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage embedding models for neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 955-960, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1s",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "IN-TERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "1045--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Luk\u00e1s Burget, Jan Cernock\u00fd, and Sanjeev Khudanpur. 2010. Recur- rent neural network based language model. In IN- TERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010, pages 1045-1048.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using the output embedding to improve language models",
"authors": [
{
"first": "Ofir",
"middle": [],
"last": "Press",
"suffix": ""
},
{
"first": "Lior",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ofir Press and Lior Wolf. 2016. Using the output embedding to improve language models. volume abs/1608.05859.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Large, pruned or continuous space language models on a gpu for statistical machine translation",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Rousseau",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Attik",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT",
"volume": "",
"issue": "",
"pages": "11--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk, Anthony Rousseau, and Mohammed Attik. 2012. Large, pruned or continuous space lan- guage models on a gpu for statistical machine trans- lation. In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, pages 11-19.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15:1929-1958.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Recurrent memory network for language modeling. arXiv",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ke",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke M. Tran, Arianna Bisazza, and Christof Monz. 2016. Recurrent memory network for language modeling. arXiv, abs/1601.01272.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhudinov",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning (ICML-15)",
"volume": "",
"issue": "",
"pages": "2048--2057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual atten- tion. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2048-2057.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Recurrent neural network regularization",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2015. Recurrent neural network regularization. vol- ume abs/1409.2329.",
"links": null
}
},
"ref_entries": {}
}
}