Dataset schema (field, type, and observed string-length range):

gem_id               stringlengths   37-41
paper_id             stringlengths   3-4
paper_title          stringlengths   19-183
paper_abstract       stringlengths   168-1.38k
paper_content        sequence
paper_headers        sequence
slide_id             stringlengths   37-41
slide_title          stringlengths   2-85
slide_content_text   stringlengths   11-2.55k
target               stringlengths   11-2.55k
references           list
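The schema above can also be inspected programmatically. The following is a minimal sketch, assuming the dataset is published on the Hugging Face Hub under the GEM organization as "GEM/SciDuet" with a "train" split; the exact dataset identifier and split name are assumptions, not stated in this preview.

```python
# Minimal sketch for loading and inspecting records with the schema shown above.
# Assumption: the dataset is hosted on the Hugging Face Hub as "GEM/SciDuet"
# with a "train" split; adjust the identifier/split if they differ.
from datasets import load_dataset

dataset = load_dataset("GEM/SciDuet", split="train")

example = dataset[0]
print(example["gem_id"])       # e.g. "GEM-SciDuet-train-30#paper-1041#slide-34"
print(example["paper_title"])  # title of the source paper
print(example["slide_title"])  # title of the slide paired with this record

# "paper_content" is a sequence field: parallel lists of sentence ids and sentence strings.
content = example["paper_content"]
print(len(content["paper_content_text"]), "sentences in the source paper")

# "target" holds the reference slide text (same length range as slide_content_text).
print(example["target"][:200])
```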
GEM-SciDuet-train-30#paper-1041#slide-34
1041
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better
Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-of-the-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies, provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms left-corner and bottom-up variants in capturing long-distance structural dependencies.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189 ], "paper_content_text": [ "Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.", "Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .", "Here we revisit the question asked by Linzen et al.", "(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.", "to what extent are these models able to learn non-local syntactic dependencies in natural language?", "Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.", "We provide an example of this task in Fig.", "1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.", "Contrary to the findings of Linzen et al.", "(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).", "Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.", "Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.", "Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?", "We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).", "We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.", "Rather surprisingly, syntactic LSTM language models without explicit composition 
have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .", "Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.", "As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).", "Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .", "Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?", "Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.", "In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.", "In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.", "As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.", "Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.", "(2016) .", "Experimental Settings.", "We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.", "(2016) .", "1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.", "We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .", "Similar to Linzen et al.", "(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.", "All models are implemented using the DyNet library (Neubig et al., 2017) .", "Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.", "We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.", "2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.", "For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network 
capacity plays an especially important role in propagating relevant structural information across a large number of steps.", "5 Our experiment independently derives the same finding as the recent work of Gulordava et al.", "(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.", "(2016) results.", "While the pretrained large-scale language model of Jozefowicz et al.", "(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.", "Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .", "In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).", "Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.", "If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.", "We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.", "The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.", "(2016) number agreement dataset.", "A priori, we expect that number agreement is harder for character LSTMs for two reasons.", "First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.", "tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.", "Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.", "On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.", "As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.", "This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.", "To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language 
modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .", "Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?", "We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.", "Our choice of RNNGs is motivated by the findings of Kuncoro et al.", "(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.", "Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.", "In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.", "Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.", "Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .", "Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.", "3(a) .", "7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.", "During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.", "The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.", "Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.", "Experimental settings.", "We obtain phrasestructure trees for the Linzen et al.", "(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .", "At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.", "9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.", "An example of the stack contents (i.e.", "the prefix) when predicting the verb is provided in Fig.", "3(a) .", "We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.", "Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).", 
"Discussion.", "Fig.", "2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.", "We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.", "3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.", "3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.", "The performance gain of RNNGs might arise from two potential causes.", "First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.", "Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.", "Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?", "To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.", "Taking the example in Fig.", "3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.", "In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.", "Fig.", "2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.", "This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.", "Our finding is consistent with the recent work of Yogatama et al.", "(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.", "Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.", "Perplexity.", "To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?", "We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.", "Following Dyer et al.", "(2016) , for each sentence on the validation set we sample 
100 candidate trees from a discriminative model 11 as our proposal distribution.", "As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.", "Incrementality constraints.", "As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.", "To address this concern, we remark that the empirical evidence from Fig.", "2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.", "Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.", "(2017) .", "12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.", "13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.", "Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.", "2 .", "Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.", "13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.", "struction order than the top-down, left-to-right order used above.", "These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.", "Hale, 2014, chapter 3).", "They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .", "This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.", "14 Here we state our hypothesis on why the build order matters.", "The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.", "Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions 
(Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .", "Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?", "These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.", "In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.", "3 , more or less salient.", "If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.", "The three proposed build orders are compared in Fig.", "3 , showing the respective configurations (i.e.", "the prefix) when generating the main verb in a sentence with a single attractor.", "15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.", "Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.", "4 .", "Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.", "As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.", "In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.", "16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).", "In step 5 of Fig.", "4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.", "We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.", "whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.", "5 .", "If not, the process is then repeated after the topmost stack element is popped.", "Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.", "5 this is an NP.", "A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.", "4 for examples where this happens).", "We thus introduce an explicit STOP action (step 8, Fig.", "4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.", "16 This mechanism is not necessary with strictly binary branching 
trees, since each new nonterminal always consists of the two children at the top of the stack.", "Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.", "As illustrated in Fig.", "6 , this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).", "A REDUCE action similarly calls the composition operator once the phrase is complete (e.g.", "step 3).", "The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT SW(S) action is done next (step 4).", "The NT SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack.", "This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed.", "In step 1 of Fig.", "6 , with a single element The on the stack, the action NT SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).", "Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.", "Table 4 summarizes average stack depths and perplexities 17 on the Linzen et al.", "(2016) validation set.", "We evaluate each of the variants in terms of number agreement accuracy as an evidence of its suitability to model structural dependencies in English, presented in Table 5 .", "To account for randomness in training, we report the error rate summary statistics of ten different runs.", "Avg.", "(±sdev)/min/max n=2 n=3 n=4 LM 5.8(±0.2)/5.5/6.0 9.6(±0.7)/8.8/10.1 14.1(±1.2)/13.0/15.3 TD 5.5(±0.4)/4.9/5.8 7.8(±0.6)/7.4/8.0 8.9(±1.1)/7.9/9.8 LC 5.4(±0.3)/5.2/5.5 8.2(±0.4)/7.9/8.7 9.9(±1.3)/8.8/11.5 BU 5.7(±0.3) 5.5/5.8 8.5(±0.7)/8.0/9.3 9.7(±1.1)/9.0/11.3 Table 5 : Number agreement error rates for topdown (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors.", "LM indicates the best sequential language model baseline ( §2).", "We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.", "Discussion.", "In Table 5 , we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.", "All three RNNG variants outperform the sequential LSTM language model baseline for these cases.", "Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.", "We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.", "The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.", "Only anticipatory representations, it is said, could explain the rapid, 
incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.", "While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.", "We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.", "(2010) .", "Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.", "Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.", "Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.", "We explore the possibility that how the structure is built affects number agreement performance.", "Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Number Agreement with LSTM Language Models", "Number Agreement with RNNGs", "Recurrent Neural Network Grammars", "Experiments", "Further Analysis", "Top-Down, Left-Corner, and Bottom-Up Traversals", "Bottom-Up Traversal", "Left-Corner Traversal", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-30#paper-1041#slide-34
Part Three Recap and Outlook
We proposed two new RNNG variants with different tree construction orders: left-corner and bottom-up RNNGs. Top-down construction still performs best in number agreement. It is the most anticipatory (Marslen-Wilson, 1973; Tanenhaus et al., 1995). We can apply the three strategies to parsing and as a linking hypothesis to human brain signals during comprehension (Hale et al., 2018). LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018)
We proposed two new RNNG variants with different tree construction orders: left-corner and bottom-up RNNGs. Top-down construction still performs best in number agreement. It is the most anticipatory (Marslen-Wilson, 1973; Tanenhaus et al., 1995). We can apply the three strategies to parsing and as a linking hypothesis to human brain signals during comprehension (Hale et al., 2018). LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018)
[]
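For reference, the record above stores the paper body as parallel lists ("paper_content_id" and "paper_content_text") and the section structure as parallel lists in "paper_headers". The sketch below shows one way to reassemble the full source text and pair it with the slide-level target; the function name and the "record" variable are illustrative helpers, not part of the dataset.

```python
# Sketch: reassembling a source document and its reference slide from one record.
# "record" is assumed to be a single example dict with the fields shown above.
def build_source_and_target(record):
    content = record["paper_content"]
    headers = record["paper_headers"]

    # Parallel lists of sentence ids and sentence strings.
    sentences = content["paper_content_text"]
    paper_text = " ".join(sentences)

    # Section headers, e.g. "1 Introduction", "4.1 Bottom-Up Traversal", ...
    section_titles = [
        f"{num} {title}"
        for num, title in zip(headers["paper_header_number"],
                              headers["paper_header_content"])
    ]

    source = "\n".join([record["paper_title"], record["paper_abstract"], paper_text])
    target = record["target"]  # reference slide text (here identical to slide_content_text)
    return source, section_titles, target

# Example usage, assuming `example` was loaded as in the earlier sketch:
# source, sections, target = build_source_and_target(example)
```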
GEM-SciDuet-train-30#paper-1041#slide-35
1041
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better
Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies-provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189 ], "paper_content_text": [ "Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data.", "Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017) .", "Here we revisit the question asked by Linzen et al.", "(2016) : as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1 : An example of the number agreement task with two attractors and a subject-verb distance of five.", "to what extent are these models able to learn non-local syntactic dependencies in natural language?", "Identifying number agreement between subjects and verbs-especially in the presence of attractors-can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form.", "We provide an example of this task in Fig.", "1 , where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form.", "Contrary to the findings of Linzen et al.", "(2016) , our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors ( §2).", "Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors.", "Nevertheless, we find that a strong character LSTM language model-which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively-performs much worse in the number agreement task.", "Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias?", "We discover that a certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RN-NGs) , considerably outperforms sequential LSTM language models for cases with multiple attractors ( §3).", "We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations.", "Rather surprisingly, syntactic LSTM language models without explicit composition 
have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016) .", "Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model's ability to identify structural dependencies in English.", "As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003 (Henderson, , 2004 and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders ( §4).", "Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017) .", "Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy?", "Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors.", "In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner.", "In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set.", "As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions.", "Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al.", "(2016) .", "Experimental Settings.", "We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al.", "(2016) .", "1 Word types beyond the most frequent 10,000 are converted to their respective POS tags.", "We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1 .", "Similar to Linzen et al.", "(2016) , we only include test cases where all intervening nouns are of the opposite number forms than the subject noun.", "All models are implemented using the DyNet library (Neubig et al., 2017) .", "Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one.", "We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.", "2 Following Linzen Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.", "For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network 
capacity plays an especially important role in propagating relevant structural information across a large number of steps.", "5 Our experiment independently derives the same finding as the recent work of Gulordava et al.", "(2018) , who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al.", "(2016) results.", "While the pretrained large-scale language model of Jozefowicz et al.", "(2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set.", "Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017) .", "In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1).", "Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject.", "If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent.", "We empirically test this conjecture by running a strong character-based LSTM language model of that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012) , with 1,800 hidden units and 10 million parameters.", "The character LSTM is trained, validated, and tested 6 on the same split of the Linzen et al.", "(2016) number agreement dataset.", "A priori, we expect that number agreement is harder for character LSTMs for two reasons.", "First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors.", "tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens.", "Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model.", "On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with 's'.", "As demonstrated on the last row of Table 2 , we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts.", "This finding is consistent with that of Sennrich (2017) , who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement.", "To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language 
modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017) .", "Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures?", "We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs) , which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents.", "Our choice of RNNGs is motivated by the findings of Kuncoro et al.", "(2017) , who find evidence for syntactic headedness in RNNG phrasal representations.", "Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase \"The flowers in the vase\" would be similar to the syntactic head flowers rather than vase.", "In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs' representation.", "Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals.", "Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017) .", "Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence \"The flowers in the vase are blooming\" is illustrated in Fig.", "3(a) .", "7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols.", "During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack.", "The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees.", "Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings.", "Experimental settings.", "We obtain phrasestructure trees for the Linzen et al.", "(2016) dataset using a publicly available discriminative model 8 trained on the Penn Treebank (Marcus et al., 1993) .", "At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.", "9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols.", "An example of the stack contents (i.e.", "the prefix) when predicting the verb is provided in Fig.", "3(a) .", "We similarly run a grid search over the same hyper-parameter range as the sequential LSTM and compare the results with the strongest sequential LSTM baseline from §2.", "Figure 2 : Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right).", 
"Discussion.", "Fig.", "2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors.", "We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig.", "3(a) ) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig.", "3(a) ) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases.", "The performance gain of RNNGs might arise from two potential causes.", "First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences.", "Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies.", "Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs?", "To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016) , similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator.", "Taking the example in Fig.", "3(a) , the sequential syntactic LSTM would have fifteen 10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs' stack LSTM.", "In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process.", "Fig.", "2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors.", "This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy.", "Our finding is consistent with the recent work of Yogatama et al.", "(2018) , who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows.", "Further Analysis In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.", "Perplexity.", "To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric?", "We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model.", "Following Dyer et al.", "(2016) , for each sentence on the validation set we sample 
100 candidate trees from a discriminative model 11 as our proposal distribution.", "As demonstrated in Table 3 , the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RN-NGs in number agreement with multiple attractors, suggesting that lower perplexity is not neces-sarily correlated with number agreement success.", "Incrementality constraints.", "As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model.", "To address this concern, we remark that the empirical evidence from Fig.", "2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models.", "Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al.", "(2017) .", "12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.", "13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy.", "Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig.", "2 .", "Top-Down, Left-Corner, and Bottom-Up Traversals In this section, we propose two new variants of RNNGs that construct trees using a different con- 12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.", "13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.", "struction order than the top-down, left-to-right order used above.", "These are a bottom-up construction order ( §4.1) and a left-corner construction order ( §4.2), analogous to the well-known parsing strategies (e.g.", "Hale, 2014, chapter 3).", "They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminescent of Markov Grammars (Charniak, 1997) .", "This stepby-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.", "14 Here we state our hypothesis on why the build order matters.", "The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner.", "Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions 
(Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992 , inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017) .", "Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?", "These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure.", "In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig.", "3 , more or less salient.", "If it is more salient, the model should be better-able to inflect the main verb in agreement with this controller, without getting distracted by the attractors.", "The three proposed build orders are compared in Fig.", "3 , showing the respective configurations (i.e.", "the prefix) when generating the main verb in a sentence with a single attractor.", "15 In ad-dition, we show concrete action sequences for a simpler sentence in each section.", "Bottom-Up Traversal In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig.", "4 .", "Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals.", "As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node.", "In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.", "16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n).", "In step 5 of Fig.", "4 , the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.", "We implement this extent decision using a stick-breaking construction-using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer-which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e.", "whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig.", "5 .", "If not, the process is then repeated after the topmost stack element is popped.", "Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig.", "5 this is an NP.", "A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig.", "4 for examples where this happens).", "We thus introduce an explicit STOP action (step 8, Fig.", "4) , indicating the tree is complete, which is only assigned non-zero probability when the stack has a during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.", "16 This mechanism is not necessary with strictly binary branching 
trees, since each new nonterminal always consists of the two children at the top of the stack.", "Left-Corner Traversal Left-corner traversals combine some aspects of top-down and bottom-up processing.", "As illustrated in Fig. 6, this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2).", "A REDUCE action similarly calls the composition operator once the phrase is complete (e.g. step 3).", "The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT_SW(S) action is done next (step 4).", "The NT_SW(X) action is similar to the NT(X) action from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, it swaps the two topmost elements of the stack.", "This is necessary because the parent nonterminal node is not built until after its leftmost child has been constructed.", "In step 1 of Fig. 6, with a single element The on the stack, the action NT_SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).", "Experiments We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity.", "Table 4 summarizes average stack depths and perplexities on the Linzen et al. (2016) validation set.", "We evaluate each of the variants in terms of number agreement accuracy as evidence of its suitability to model structural dependencies in English, presented in Table 5.", "To account for randomness in training, we report the error rate summary statistics of ten different runs.", "Table 5: Number agreement error rates, reported as mean (±sdev)/min/max over 10 random seeds, for the best sequential language model baseline (LM, §2) and the top-down (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors n. n=2: LM 5.8(±0.2)/5.5/6.0, TD 5.5(±0.4)/4.9/5.8, LC 5.4(±0.3)/5.2/5.5, BU 5.7(±0.3)/5.5/5.8; n=3: LM 9.6(±0.7)/8.8/10.1, TD 7.8(±0.6)/7.4/8.0, LC 8.2(±0.4)/7.9/8.7, BU 8.5(±0.7)/8.0/9.3; n=4: LM 14.1(±1.2)/13.0/15.3, TD 8.9(±1.1)/7.9/9.8, LC 9.9(±1.3)/8.8/11.5, BU 9.7(±1.1)/9.0/11.3.", "Discussion.", "In Table 5, we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors.", "All three RNNG variants outperform the sequential LSTM language model baseline for these cases.", "Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English.", "We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.", "The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory.", "Only anticipatory representations, it is said, could explain the rapid, 
incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995) ; this line of thinking similarly motivates Charniak (2010) , among others.", "While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling.", "We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al.", "(2010) .", "Conclusion Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement accuracy with multiple attractors.", "Despite this strong performance, we discover explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation.", "Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit compositions; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies.", "We explore the possibility that how the structure is built affects number agreement performance.", "Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors." ] }
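The three construction orders compared above can be seen side by side on a toy parse. The sketch below is a minimal illustration, assuming a small example tree and plain recursive traversals; it is not code from the paper, although the action vocabulary (NT, NT_SW, GEN, REDUCE, STOP) follows it.

```python
# Minimal sketch (toy example, not the authors' code): the action sequences the
# top-down, left-corner, and bottom-up RNNG variants would emit for one tree,
# using the paper's action names (NT, NT_SW, GEN, REDUCE, STOP).
TREE = ("S", ("NP", "The", "fox"), ("VP", "eats", ("NP", "worms")))

def is_terminal(node):
    return isinstance(node, str)

def top_down(node):
    if is_terminal(node):
        return [f"GEN({node})"]
    label, *children = node
    actions = [f"NT({label})"]
    for child in children:
        actions += top_down(child)
    return actions + ["REDUCE"]

def bottom_up(node):
    if is_terminal(node):
        return [f"GEN({node})"]
    label, *children = node
    actions = []
    for child in children:
        actions += bottom_up(child)
    # REDUCE(X, n): choose the label and how many completed items it covers.
    return actions + [f"REDUCE({label},{len(children)})"]

def left_corner(node):
    if is_terminal(node):
        return [f"GEN({node})"]
    label, *children = node
    # Build the leftmost child first, then announce its parent with NT_SW,
    # then generate the remaining children top-down and close with REDUCE.
    actions = left_corner(children[0]) + [f"NT_SW({label})"]
    for child in children[1:]:
        actions += left_corner(child)
    return actions + ["REDUCE"]

print("top-down   :", top_down(TREE))
print("left-corner:", left_corner(TREE))
print("bottom-up  :", bottom_up(TREE) + ["STOP"])
```

All three orders emit the terminals in the same left-to-right sequence, which is why the variants remain comparable under the number-agreement diagnostic.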
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Number Agreement with LSTM Language Models", "Number Agreement with RNNGs", "Recurrent Neural Network Grammars", "Experiments", "Further Analysis", "Top-Down, Left-Corner, and Bottom-Up Traversals", "Bottom-Up Traversal", "Left-Corner Traversal", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-30#paper-1041#slide-35
Conclusion
LSTM language models with enough capacity can learn number agreement well, while a strong character LSTM performs much worse. Explicitly modelling the syntactic structure with RNNGs that have a hierarchical inductive bias leads to much better number agreement. Syntactic annotation alone does not help if the model is still sequential. Top-down construction order outperforms left-corner and bottom-up variants in difficult number agreement cases. Perplexity does not completely correlate with number agreement. LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018)
LSTM language models with enough capacity can learn number agreement well, while a strong character LSTM performs much worse. Explicitly modelling the syntactic structure with RNNGs that have a hierarchical inductive bias leads to much better number agreement. Syntactic annotation alone does not help if the model is still sequential. Top-down construction order outperforms left-corner and bottom-up variants in difficult number agreement cases. Perplexity does not completely correlate with number agreement. LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modelling Structure Makes Them Better - Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom (ACL 2018)
[]
GEM-SciDuet-train-31#paper-1044#slide-0
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem text from (2) to e^c_i = b^c_i, where b^c_i is a learnable parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b^c_i, which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f_op(e_1, e_2) = e_op as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are.", "Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e^c_i = Attention(h^E_{p_i}, {h^E_t}_{t=1}^{m}). (20)", "Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "Figure 4: The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas. Each basket can contain 6 bananas. How many bananas need to be taken off such that exactly 9 baskets are filled?\".", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g. \"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with the hand-crafted features for solving math word problems proposed by prior research (Hosseini et al., 2014).", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q_t in (6)) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5, where most attention focuses on the end of the sentence.", "Unlike the machine translation task, where the attention shows word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
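The decoder's stack actions have a purely symbolic side that can be sketched independently of the neural selectors. The following is a minimal sketch, assuming the standard postfix convention for operand order and using SymPy (as the paper does) to solve the resulting equation for the running example x = (10 − 1 × 5) ÷ 0.5; the action names and the helper function are illustrative, not the authors' implementation.

```python
# Minimal sketch (an assumption for illustration, not the authors' decoder):
# the purely symbolic effect of the stack actions that build an equation in
# postfix order, followed by SymPy (used by the paper) to solve the unknown.
# Operand order for binary operators follows the standard postfix convention.
import operator
import sympy

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def run_stack_actions(actions):
    """Execute (action, argument) pairs; return recorded equations and unknowns."""
    stack, equations, unknowns = [], [], {}
    for action, arg in actions:
        if action == "gen_var":              # introduce an unknown variable
            unknowns[arg] = sympy.Symbol(arg)
        elif action == "push":               # push a constant or an unknown (by name)
            stack.append(unknowns.get(arg, arg))
        elif action == "op":                 # binary operator application
            right, left = stack.pop(), stack.pop()
            stack.append(OPS[arg](left, right))
        elif action == "equal":              # an equation is completed
            right, left = stack.pop(), stack.pop()
            equations.append(sympy.Eq(left, right))
    return equations, unknowns

# x = (10 - 1 * 5) / 0.5, written as a postfix action sequence.
actions = [
    ("gen_var", "x"), ("push", "x"),
    ("push", sympy.Integer(10)), ("push", sympy.Integer(1)), ("push", sympy.Integer(5)),
    ("op", "*"), ("op", "-"),
    ("push", sympy.Rational(1, 2)),          # $0.5 per notebook, kept exact
    ("op", "/"), ("equal", None),
]

equations, unknowns = run_stack_actions(actions)
print(equations)                                        # [Eq(x, 10)]
print(sympy.solve(equations, list(unknowns.values())))  # {x: 10}, i.e. 10 notebooks
```

Keeping the constants as exact rationals avoids floating-point drift before SymPy solves for the unknown.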
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-0
Math Word Problem
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens? Reasoning & Solving
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens? Reasoning & Solving
[]
GEM-SciDuet-train-31#paper-1044#slide-1
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability $P(Y_t \mid \{\tilde{y}_i\}^{t-1}_{i=1}, \{w_i\}^m_{i=1})$ is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability $P(Z_t \mid \{\tilde{Y}_i\}^{t-1}_{i=1}, \{w_i\}^m_{i=1})$ is chosen.", "When the stack has fewer than 2 elements, the probabilities of applying the operators $+, -, \times, \div, =$ are masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, the Python package SymPy (Meurer et al., 2017) is used to solve for the unknown variable.", "An example of the inference procedure is illustrated in Figure 3.", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23K (Wang et al., 2017), which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a single-unknown-variable equation and only uses the operators $+, -, \times, \div$.", "Also, except for $\pi$ and 1, the quantities in the equation can be found in the problem text.", "There are also other large-scale datasets like Dolphin18K (Shi et al., 2015) and AQuA (Ling et al., 2017), containing 18,460 and 100,000 math word problems respectively.", "The reasons for not evaluating on these two datasets are that 1) Dolphin18K contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rationales for solving the problems, but the equations in the rationales are not formal (e.g., mixed with texts, using x to represent ×, etc.) and are inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large-scale, good-quality dataset."
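To make the decoding and inference procedure described above concrete, the following is a minimal, runnable Python sketch of the stack mechanics and the final SymPy solving step. It is an illustrative reconstruction rather than the authors' released code: the neural stack-action and operand selectors are replaced by a fixed action trace for the Figure 1 problem (x = (10 − 1 × 5) ÷ 0.5), and all helper names in it are assumptions.

```python
# Illustrative sketch of the stack-based decoding described above.
# In the real model, each step would instead take the argmax of the
# StackActionSelector / OperandSelector distributions, with operator and "="
# actions masked out whenever the stack holds fewer than two elements.
import sympy

x = sympy.Symbol("x")

# Assumed decoded trace (postfix order) for the Figure 1 problem.
trace = [
    ("gen_var", None),            # create the unknown variable x
    ("push", x),
    ("push", 10), ("push", 1), ("push", 5),
    ("op", "*"),                  # 1 * 5
    ("op", "-"),                  # 10 - (1 * 5)
    ("push", 0.5),
    ("op", "/"),                  # (10 - 1 * 5) / 0.5
    ("op", "="),                  # equal application: the equation is complete
]

stack, equations = [], []
for action, arg in trace:
    if action == "gen_var":
        pass                      # the model would compute e_x via attention here
    elif action == "push":
        # The full model also pushes a semantic vector alongside the symbol.
        stack.append(sympy.sympify(arg))
    elif action == "op" and arg == "=":
        rhs, lhs = stack.pop(), stack.pop()
        equations.append(sympy.Eq(lhs, rhs))
    elif action == "op":
        b, a = stack.pop(), stack.pop()
        result = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[arg]
        stack.append(result)      # the semantic transformer f_op would run here

print(sympy.solve(equations, x))  # solves the generated equation: Tom can buy 10 notebooks
```

Keeping the decoder's output in postfix order is what makes the legality check trivial: an operator is only proposed once at least two operands are on the stack, so the masking rule above is enough to guarantee well-formed equations.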
, "Algorithm 1 (training and inference): while the problem is not solved, at each decoding step t compute $h^D_t, c^D_t \leftarrow \mathrm{LSTM}(h^D_{t-1}, c^D_{t-1}, res_{t-1})$; $s_t \leftarrow$ S.get_top2(); $q_t \leftarrow \mathrm{Attention}(h^D_t, \{h^E_i\}^m_{i=1})$; $r_t \leftarrow [h^D_t; s_t; q_t]$; $p^{sa} \leftarrow \mathrm{StackActionSelector}(r^{sa}_t)$; $p^{opd} \leftarrow \mathrm{OperandSelector}(r^{opd}_t)$.", "If training (the target equation y is available), set $Y_t \leftarrow y_t$ and accumulate the loss: if $y_t$ is an operand, loss $\leftarrow$ loss $+ L_1(\mathrm{push}) + L_2(y_t)$; otherwise loss $\leftarrow$ loss $+ L_1(y_t)$.", "Otherwise (inference), set $Y_t \leftarrow \arg\max \mathrm{StackActionSelector}(r^{sa}_t)$, and if $Y_t$ = push, set $Z_t \leftarrow \arg\max \mathrm{OperandSelector}(r^{opd}_t)$.", "Then update the stack: if $Y_t$ = gen_var, compute $e_x \leftarrow \mathrm{Attention}(h^D_t, h^E)$ and set $res_t \leftarrow e_x$; if $Y_t$ = push, execute S.push($v_{Z_t}, e_{Z_t}$) and set $res_t \leftarrow e_{Z_t}$; if $Y_t \in \{+, -, \times, \div\}$, apply the operator to the top two stack elements as in (15) and set $res_t$ to the transformed semantic representation; if $Y_t$ is the equal application, record the equation as in (16).", "Results Our proposed end-to-end model belongs to the generation category, and the single-model performance achieved by our proposed model is a new state-of-the-art result (> 65%), even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single models, ours obtains more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017).", "The performance of our character-based model also shows that our model is capable of learning relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Table 2: 5-fold cross validation results of ablation tests.", "Char-Based vs. Word-Based As reported above, using the word-based model instead of the character-based model only causes a 0.5% performance drop.", "To fairly compare with prior word-based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses $r_t$ instead of $r^{sa}_t$ and $r^{opd}_t$ as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding $q_{t-1}$ in (11), so the input of both the operator and operand selectors becomes $r_t = [h^D_t; s_t]$.", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status ($s_t$ in (11)), experiments removing the stack status from the input of both the operator and operand selectors ($r_t = h^D_t$) are conducted.", "The results well justify our idea of choosing operators based on the semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator $\circ$ into $f_\circ(e_1, e_2) = e_\circ$, where $e_\circ$ is a learnable parameter and is different for different operators.", "Therefore, $e_\circ$ acts like the embedding of the operator $\circ$, and the decoding process becomes more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied to the operands but also other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite the semantic
representation of the i-th operand in the problem texts from (2) to $e^c_i = b^c_i$, where $b^c_i$ is a parameter.", "Figure 4: The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas. Each basket can contain 6 bananas. How many bananas are needed to be taken off such that exactly 9 baskets are filled?\".", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by $b^c_i$, which merely represents a symbolic placeholder in an equation.", "Because the semantic transformer is designed to transform semantic representations, applying this component here is meaningless.", "Hence the semantic transformer is also replaced with $f_\circ(e_1, e_2) = e_\circ$, as in the setting of the previous ablation test.", "The results show that removing the semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper, modeling the semantic meanings of symbols, is thus validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, and then inspect the decoding process.", "Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as $e^c_i = \mathrm{Attention}(h^E_{p_i}, \{h^E_t\}^m_{t=1})$. (20)", "Then we check the trained self-attention map ($\alpha$ in the attention function) on the validation dataset.", "For some problems, the self-attention that generates the semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g. \"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with the hand-crafted features for solving math word problems proposed by prior research (Hosseini et al., 2014).", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map ($q_t$ in (6)) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5, where most attention focuses on the end of the sentence.", "Unlike the machine translation task, where the attention shows the word-level alignment between source and target languages, solving math word problems requires higher-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the
activation of the gates $g^{sa}, g^{opd}$ at each step of the decoding process is shown in the bottom of Figure 5.", "It shows that most of the time the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon: the activation $g^{sa}_2$, which controls how much attention the stack action selector puts on the stack state when deciding a stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5, $g^{sa}_2$ is less than 0.20 until the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75 − 2.75) on the stack when selecting the stack action division operator application (÷).", "In terms of the activation of $g^{opd}$, we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Figure 5: Word attention and gate activation ($g^{sa}$ and $g^{opd}$) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75. What is the unknown number?\", where the associated equation is x = (6.75 − 2.75) ÷ 5. Note that $g^{opd}$ is meaningful only when the t-th stack action is push op.", "Error Analysis We randomly sample some results predicted incorrectly by our model, shown in Table 3.", "Table 3 (Problem & Results): 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers. Yellow flowers are more than red ones by 1/6. How many yellow flowers are there?) Generated Equation: 60 + 1/6; Correct Answer: 70. 火车48小时行驶5920千米,汽车25小时行驶2250千米,汽车平均每小时比火车每小时慢多少千米? (The train travels 5920 kilometers in 48 hours, and the car travels 2250 kilometers in 25 hours. How many kilometers per hour is the car slower than the train?) Generated Equation: 2250 ÷ 25 − 5920 ÷ 48; Correct Answer: 33 1/3. 小红前面5人,后面7人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind. How many persons are there in total?) Generated Equation: 5 + 7; Correct Answer: 13.", "In the first example, the error is due to language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model successfully identifies the problem as a comparison problem, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, the above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves state-of-the-art performance on the benchmark dataset and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how humans perform reasoning when writing equations, providing better interpretation without the need of labeled rationales.", "A Algorithm Detail The training and inference procedures are shown in Algorithm 1.", "B Hyperparameter Setup The model is trained with the Adam optimizer (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained
embeddings using FastText (Joulin et al., 2016) are adopted.", "The hidden state size of the LSTMs used in the encoder and decoder is 256.", "The dimension of the hidden layers in the attention, the semantic transformer, and the operand/stack action selectors is 256.", "The dropout rate is set to 0.1 before the input of the decoder LSTM, before the stack action selector, and after the hidden layers of the stack action selector and the attention.", "The reported accuracy is the result of 5-fold cross-validation, the same setting as Wang et al., for a fair comparison.", "C Error Analysis between Our Model and Seq2Seq We implement the seq2seq model as proposed by Wang et al. and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results that seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 shows the results that our model predicts correctly but seq2seq cannot." ] }
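To complement the description of the semantic transformer in section 3.3.4 and the hyperparameters in Appendix B, here is a brief, hypothetical sketch of that module. The PyTorch framing, the class name, and the operator keys are assumptions made for illustration; the paper does not specify a framework, and this is not the authors' released code.

```python
# Illustrative PyTorch-style sketch of the semantic transformer of section 3.3.4:
# f_op(e1, e2) = tanh(U_op ReLU(W_op [e1; e2] + b_op) + c_op),
# with one set of parameters per operator and the 256-dimensional hidden size
# reported in Appendix B.
import torch
import torch.nn as nn


class SemanticTransformer(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        # Separate two-layer transformation per operator ("add" = +, "sub" = -,
        # "mul" = ×, "div" = ÷), as the paper states that different operators
        # have different parameters.
        self.ops = nn.ModuleDict({
            op: nn.Sequential(
                nn.Linear(2 * dim, dim),  # W_op [e1; e2] + b_op
                nn.ReLU(),
                nn.Linear(dim, dim),      # U_op (.) + c_op
                nn.Tanh(),
            )
            for op in ["add", "sub", "mul", "div"]
        })

    def forward(self, op: str, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        return self.ops[op](torch.cat([e1, e2], dim=-1))


# Example: combine the semantics of two operands pushed on the stack.
f = SemanticTransformer()
e1, e2 = torch.randn(1, 256), torch.randn(1, 256)
e_new = f("sub", e1, e2)   # semantic representation of the new composite operand
print(e_new.shape)         # torch.Size([1, 256])
```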
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-1
Prior Work
Non-neural approaches Deep learning M I U L A B (Kushman et al., Upadhyay and Chang) (Wang et al., Ling et al.) Rely on hand-crafted features! Does not use the structure of math expression. Our model is end-to-end and structural!
Non-neural approaches Deep learning M I U L A B (Kushman et al., Upadhyay and Chang) (Wang et al., Ling et al.) Rely on hand-crafted features! Does not use the structure of math expression. Our model is end-to-end and structural!
[]
GEM-SciDuet-train-31#paper-1044#slide-2
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts q u a n ti fi e r 个 b a n a n a 香 蕉 , e v e r y 每 ( b a s k e t) < u n k > 6 .0 q u a n ti fi e r 个 , ta k e o ff 拿 掉 h o w m a n y 多 少 q u a n ti fi e r 个 , th e n 就 c a n 可 以 e x a c tl y 正 好 fi ll 装 9 .0 q u a n ti fi e r 个 b a s k e ts 篮 子 了 < u n k > .", "9.0 6.0 58.0 Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be token off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
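As a companion to the gated feature construction in Eqs. (10)–(11) of section 3.3 above, the following is a small, hypothetical sketch of that mechanism. The PyTorch framing and all names are illustrative assumptions rather than the authors' implementation, and it assumes the three gates are scalars, as the indexing $g^{sa}_{t,1}, g^{sa}_{t,2}, g^{sa}_{t,3}$ suggests.

```python
# Illustrative sketch (PyTorch assumed) of the gated features in Eqs. (10)-(11):
# the decoder state h_t, stack status s_t, and attention summary q_t are
# concatenated, and a sigmoid gate decides how much of each part feeds the
# stack-action (or operand) selector.
import torch
import torch.nn as nn


class GatedFeatures(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        # s_t stacks the top-2 operand semantics, so it is 2 * dim wide.
        self.gate = nn.Linear(dim + 2 * dim + dim, 3)  # W^sa in Eq. (11)

    def forward(self, h_t, s_t, q_t):
        g = torch.sigmoid(self.gate(torch.cat([h_t, s_t, q_t], dim=-1)))
        g1, g2, g3 = g[..., 0:1], g[..., 1:2], g[..., 2:3]
        # r^sa_t = [g1 * h_t ; g2 * s_t ; g3 * q_t], Eq. (10)
        return torch.cat([g1 * h_t, g2 * s_t, g3 * q_t], dim=-1)


feats = GatedFeatures()
h = torch.randn(1, 256); s = torch.randn(1, 512); q = torch.randn(1, 256)
print(feats(h, s, q).shape)  # torch.Size([1, 1024])
```

A second instance of the same module with its own gating weights would produce $r^{opd}_t$ for the operand selector, mirroring the separate $W^{sa}$ and $W^{opd}$ parameters described in the paper.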
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-2
Overview of the Proposed Model
M I U L A B stack action stack action stack action stack action Each notebook takes $0.5 and each pen takes $1. Decoder Tom has $10. How many notebooks can he buy after buying 5 pens?
M I U L A B stack action stack action stack action stack action Each notebook takes $0.5 and each pen takes $1. Decoder Tom has $10. How many notebooks can he buy after buying 5 pens?
[]
GEM-SciDuet-train-31#paper-1044#slide-3
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts from (2) to e c i = b c i , where b c i is a parameter.", "Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be taken off such that exactly 9 baskets are filled?\".", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are.", "Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 ).", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcrafted features for solving math word problems proposed by the prior research (Hosseini et al., 2014).", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6)) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, where the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
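The encoder described in the paper content above (Eqs. (1)-(2): a bidirectional LSTM over the problem words, with each constant represented by the BLSTM output at its position p_i) can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' released implementation; the class name, the 300-dimensional embeddings, and the split of the 256-unit hidden size across directions are assumptions.

```python
# Sketch of the constant-representation encoder (Eqs. (1)-(2)): a BLSTM runs
# over the problem words and the i-th constant's representation e^c_i is the
# BLSTM output at its position p_i. Names and sizes are assumptions.
import torch
import torch.nn as nn

class ConstantEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # each direction uses hidden_dim // 2 so the concatenated h^E_t has size hidden_dim
        self.blstm = nn.LSTM(emb_dim, hidden_dim // 2, batch_first=True,
                             bidirectional=True)

    def forward(self, word_ids, constant_positions):
        # word_ids: LongTensor (batch, seq_len); constant_positions: LongTensor (batch, n_constants)
        emb = self.embedding(word_ids)              # (batch, seq_len, emb_dim)
        outputs, _ = self.blstm(emb)                # h^E_t for every position t
        # gather h^E_{p_i} for each constant position p_i  ->  e^c_i
        idx = constant_positions.unsqueeze(-1).expand(-1, -1, outputs.size(-1))
        constant_reprs = outputs.gather(1, idx)     # (batch, n_constants, hidden_dim)
        return outputs, constant_reprs
```

The second return value corresponds to the e^c_i vectors that the decoder's operand selector attends over; the external constants 1 and π would be extra learned parameter vectors, as the paper states.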
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-3
Look Again at the Problem
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens?
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens?
[]
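The additive attention of Eqs. (7)-(9) in the paper content above, which is reused both to build the context q_t and, through its weights, as the operand selector's distribution (Eq. (18)), could look roughly like the sketch below; the dimensions, class name, and the return_weights switch are illustrative assumptions.

```python
# Sketch of the additive attention: s_i = w^T tanh(W [u; v_i] + b),
# alpha_i = softmax(s_i), output = sum_i alpha_i v_i.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, query_dim, value_dim, hidden_dim=256):
        super().__init__()
        self.proj = nn.Linear(query_dim + value_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, query, values, return_weights=False):
        # query: (batch, query_dim); values: (batch, m, value_dim)
        q = query.unsqueeze(1).expand(-1, values.size(1), -1)
        scores = self.score(torch.tanh(self.proj(torch.cat([q, values], -1))))
        weights = torch.softmax(scores.squeeze(-1), dim=-1)        # alpha_i
        context = torch.bmm(weights.unsqueeze(1), values).squeeze(1)
        return weights if return_weights else context

# The decoder would call this with return_weights=False to obtain q_t, and
# with return_weights=True over the operand candidates to obtain the
# operand-selection probabilities.
```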
GEM-SciDuet-train-31#paper-1044#slide-4
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
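The training objective summarized in this abstract and detailed in the paper content above (Section 3.4: postfix targets, a stack-action loss L1 plus an operand-selection loss L2 whenever the target token is an operand) can be sketched per decoding step as below; the id-mapping dictionaries are assumed helpers for illustration, not part of the paper.

```python
# Sketch of the per-step loss L(y_t): cross-entropy over stack actions,
# plus cross-entropy over operand candidates when y_t is an operand.
import torch
import torch.nn.functional as F

def step_loss(action_logits, operand_logits, target_token,
              push_action_id, action_id_of, operand_id_of):
    """Loss for one decoding step of one example (unbatched for clarity)."""
    if target_token in operand_id_of:          # y_t is an operand -> L1(push) + L2(y_t)
        l1 = F.cross_entropy(action_logits.unsqueeze(0),
                             torch.tensor([push_action_id]))
        l2 = F.cross_entropy(operand_logits.unsqueeze(0),
                             torch.tensor([operand_id_of[target_token]]))
        return l1 + l2
    # y_t is an operator (+, -, *, /, =) handled by the stack-action selector
    return F.cross_entropy(action_logits.unsqueeze(0),
                           torch.tensor([action_id_of[target_token]]))
```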
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts q u a n ti fi e r 个 b a n a n a 香 蕉 , e v e r y 每 ( b a s k e t) < u n k > 6 .0 q u a n ti fi e r 个 , ta k e o ff 拿 掉 h o w m a n y 多 少 q u a n ti fi e r 个 , th e n 就 c a n 可 以 e x a c tl y 正 好 fi ll 装 9 .0 q u a n ti fi e r 个 b a s k e ts 篮 子 了 < u n k > .", "9.0 6.0 58.0 Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be token off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-4
Semantic Meaning of the Operands
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens? The amount of money Tom has; Price of a notebook; Price of a pen; Number of pens bought
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens? The amount of money Tom has; Price of a notebook; Price of a pen; Number of pens bought
[]
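The inference procedure described in the paper content above (Section 3.5: masking out +, −, ×, ÷, = while the stack holds fewer than two elements, then solving the recorded equation with SymPy) can be sketched as follows; the masking helper and its argument names are illustrative assumptions, while the SymPy call mirrors the package the paper says it uses.

```python
# Sketch of legal-action masking and the final SymPy solving step.
import sympy
import torch

def mask_illegal_actions(action_logits, stack_size, binary_action_ids):
    # With fewer than two elements on the stack, the binary operators and "="
    # cannot be applied, so their logits are set to -inf; every decoded
    # sequence is therefore a legal math expression.
    masked = action_logits.clone()
    if stack_size < 2:
        masked[list(binary_action_ids)] = float('-inf')
    return masked

# Once decoding finishes, the recorded symbolic equation is handed to SymPy,
# e.g. for the running example x = (10 - 1 * 5) / 0.5:
x = sympy.Symbol('x')
print(sympy.solve(sympy.Eq(x, (10 - 1 * 5) / 0.5), x))   # [10.0000000000000]
```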
GEM-SciDuet-train-31#paper-1044#slide-6
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts", "Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be taken off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-6
Preprocess
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens?
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens?
[]
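The record above describes equation generation as a sequence of stack actions (push an operand; apply a binary operator to the top two stack elements) in postfix order, with the worked example x = (10 − 1 × 5) ÷ 0.5. The sketch below only illustrates that postfix stack evaluation on the symbolic side; the list-based action encoding, function name, and operand order are assumptions of this sketch, not the dataset's or the authors' implementation.

```python
# Illustrative sketch only: postfix (stack-action) evaluation of the example
# equation x = (10 - 1 * 5) / 0.5 described in the record above.
# The action format (numbers mean "push", strings mean "apply operator") is assumed.

def evaluate_postfix(actions):
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for act in actions:
        if isinstance(act, (int, float)):   # "push" stack action
            stack.append(act)
        else:                               # "operator application": pop top two operands
            right = stack.pop()
            left = stack.pop()
            stack.append(ops[act](left, right))
    return stack.pop()

# Postfix form of (10 - 1 * 5) / 0.5: push 10, push 1, push 5, *, -, push 0.5, /
print(evaluate_postfix([10, 1, 5, "*", "-", 0.5, "/"]))  # 10.0 notebooks
```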
GEM-SciDuet-train-31#paper-1044#slide-7
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts", "Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be taken off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016) are adopted.", "The hidden state size of the LSTM used in the encoder and decoder is 256.", "The dimension of the hidden layers in the attention, the semantic transformer, and the operand/stack action selector is 256.", "The dropout rate is set to 0.1 before the input of the decoder LSTM, before the stack action selector, and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, the same as Wang et al., for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al. and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the results that seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 shows the results that our model predicts correctly but seq2seq cannot." ] }
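As a concrete reference for the hyperparameter setup in Appendix B above, the following is a minimal, hypothetical PyTorch training-setup sketch. The class and function names (SolverConfig, build_optimizer) and the 300-dimensional FastText embedding size are illustrative assumptions, not taken from the authors' released code.

```python
import torch
import torch.nn as nn

class SolverConfig:
    # Sizes reported in Appendix B; embedding_dim is an assumption
    # (pretrained FastText vectors are commonly 300-dimensional).
    embedding_dim = 300
    hidden_size = 256    # encoder/decoder LSTM hidden state size
    mlp_size = 256       # attention / semantic transformer / selector layers
    dropout = 0.1
    learning_rate = 1e-3

def build_optimizer(model: nn.Module, cfg: SolverConfig = SolverConfig()):
    # Adam with learning rate 0.001, as stated in Appendix B.
    return torch.optim.Adam(model.parameters(), lr=cfg.learning_rate)
```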
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-7
Symbol Encoding
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens? Symbolic Part / Semantic Part
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens? Symbolic Part / Semantic Part
[]
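The "Symbolic Part / Semantic Part" split shown on this slide corresponds to the (v, e) pairs that the decoder keeps on its stack (Eq. (3) in the paper content). A minimal, hypothetical sketch of that pairing; the names StackItem, encoder_states, and pos_of_10 are illustrative only.

```python
from dataclasses import dataclass
import torch

@dataclass
class StackItem:
    symbol: str              # symbolic part, e.g. "10 - 1 * 5"
    semantics: torch.Tensor  # semantic part: the vector e

# Example: pushing the constant 10 paired with the encoder state at its position.
# item = StackItem(symbol="10", semantics=encoder_states[pos_of_10])
```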
GEM-SciDuet-train-31#paper-1044#slide-8
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts q u a n ti fi e r 个 b a n a n a 香 蕉 , e v e r y 每 ( b a s k e t) < u n k > 6 .0 q u a n ti fi e r 个 , ta k e o ff 拿 掉 h o w m a n y 多 少 q u a n ti fi e r 个 , th e n 就 c a n 可 以 e x a c tl y 正 好 fi ll 装 9 .0 q u a n ti fi e r 个 b a s k e ts 篮 子 了 < u n k > .", "9.0 6.0 58.0 Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be token off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
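The operator application step described in the paper content above pops two (symbol, embedding) pairs from the stack, records the combined symbolic expression, and computes the new embedding with the operator's semantic transformer f(e_1, e_2) = tanh(U ReLU(W [e_1; e_2] + b) + c). The following is a minimal PyTorch sketch under those definitions; SemanticTransformer and apply_operator are illustrative names, not the authors' code, and the operand order follows the usual postfix convention.

```python
import torch
import torch.nn as nn

class SemanticTransformer(nn.Module):
    """f(e1, e2) = tanh(U ReLU(W [e1; e2] + b) + c); one instance per operator."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.W = nn.Linear(2 * dim, dim)  # W and b
        self.U = nn.Linear(dim, dim)      # U and c

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.U(torch.relu(self.W(torch.cat([e1, e2], dim=-1)))))

def apply_operator(stack, op_symbol: str, transformer: SemanticTransformer):
    """Pop two (symbol, embedding) pairs and push the combined pair back.

    The stack is a plain Python list whose last element is the top;
    second-popped operand comes first, as in standard postfix evaluation."""
    v_top, e_top = stack.pop()
    v_second, e_second = stack.pop()
    new_symbol = f"({v_second} {op_symbol} {v_top})"
    new_embedding = transformer(e_second, e_top)
    stack.append((new_symbol, new_embedding))
    return stack
```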
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-8
Inside Encoder
Each notebook takes $0.5 and each pen takes $1.
Each notebook takes $0.5 and each pen takes $1.
[]
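The "Inside Encoder" slide refers to the constant representation extraction of Section 3.1: a bidirectional LSTM runs over the word embeddings and the hidden states at the constants' positions become their semantic representations (Eqs. (1)-(2)), while the external constants 1 and pi get learned embeddings (Section 3.1.2). An illustrative sketch; the class and argument names are hypothetical.

```python
import torch
import torch.nn as nn

class ConstantEncoder(nn.Module):
    def __init__(self, emb_dim: int = 300, hidden: int = 128):
        super().__init__()
        # Bidirectional LSTM over the problem tokens (Eq. (1)).
        self.blstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        # Learned embeddings for the external constants 1 and pi.
        self.e_one = nn.Parameter(torch.randn(2 * hidden))
        self.e_pi = nn.Parameter(torch.randn(2 * hidden))

    def forward(self, word_embeddings: torch.Tensor, constant_positions):
        # word_embeddings: (1, m, emb_dim); constant_positions: list of indices p_i
        states, _ = self.blstm(word_embeddings)          # (1, m, 2*hidden)
        constant_reprs = states[0, constant_positions]   # e^c_i = h^E_{p_i} (Eq. (2))
        return states, constant_reprs
```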
GEM-SciDuet-train-31#paper-1044#slide-9
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
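The Training subsection (3.4) in the paper content below defines the per-step loss as the stack-action negative log-likelihood L1, plus the operand-selection loss L2 when the gold symbol is an operand pushed onto the stack. A small illustrative sketch of that objective; the gold encoding and the push_idx constant are hypothetical conventions for this example.

```python
import torch
import torch.nn.functional as F

def step_loss(action_logits: torch.Tensor, operand_logits: torch.Tensor, gold):
    """gold = ("action", idx) for operator/equal/variable-generation steps,
       or ("operand", idx) for steps that push operand idx."""
    kind, idx = gold
    push_idx = 1  # hypothetical index of the 'push' stack action
    if kind == "operand":
        l1 = F.cross_entropy(action_logits.unsqueeze(0), torch.tensor([push_idx]))
        l2 = F.cross_entropy(operand_logits.unsqueeze(0), torch.tensor([idx]))
        return l1 + l2
    return F.cross_entropy(action_logits.unsqueeze(0), torch.tensor([idx]))
```

Summing step_loss over all positions of the postfix target equation gives the total training objective described in the paper.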
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts q u a n ti fi e r 个 b a n a n a 香 蕉 , e v e r y 每 ( b a s k e t) < u n k > 6 .0 q u a n ti fi e r 个 , ta k e o ff 拿 掉 h o w m a n y 多 少 q u a n ti fi e r 个 , th e n 就 c a n 可 以 e x a c tl y 正 好 fi ll 装 9 .0 q u a n ti fi e r 个 b a s k e ts 篮 子 了 < u n k > .", "9.0 6.0 58.0 Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be token off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
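To make the Appendix B setup above concrete, the following is a minimal sketch of the model skeleton and optimizer configuration, assuming PyTorch and 300-dimensional FastText embeddings. The class and variable names are illustrative placeholders rather than the authors' released implementation; only the hyperparameters stated in Appendix B (hidden size 256, selector and attention hidden layers of 256, dropout 0.1, Adam with learning rate 0.001) are taken from the paper, and the exact wiring of the feature vector is an assumption.

import torch
import torch.nn as nn

HIDDEN = 256        # LSTM hidden state size for both encoder and decoder (Appendix B)
SELECTOR_DIM = 256  # hidden-layer size of attention, semantic transformer, and selectors
DROPOUT = 0.1       # applied before the decoder LSTM and around the stack action selector
N_ACTIONS = 7       # gen_var, push, +, -, *, /, =  (the stack actions of Section 3.3.2)

class MathSolverSkeleton(nn.Module):
    def __init__(self, embed_dim=300):  # 300-d FastText embeddings (assumed dimension)
        super().__init__()
        # bidirectional encoder; 128 units per direction so outputs are 256-d (assumption)
        self.encoder = nn.LSTM(embed_dim, HIDDEN // 2, bidirectional=True, batch_first=True)
        # decoder LSTM consumes the previous step's 256-d semantic result res_{t-1}
        self.decoder = nn.LSTMCell(HIDDEN, HIDDEN)
        self.dropout = nn.Dropout(DROPOUT)
        # stack action selector: one hidden layer with ReLU over r_sa = [h_D; s_t; q_t]
        # (h_D: 256, two stack-top semantics: 512, attention summary q_t: 256)
        self.stack_action_selector = nn.Sequential(
            nn.Linear(4 * HIDDEN, SELECTOR_DIM),
            nn.ReLU(),
            nn.Linear(SELECTOR_DIM, N_ACTIONS),
        )

model = MathSolverSkeleton()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam with learning rate 0.001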
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-9
Semantic Generation for Unknown
MIULAB Each notebook takes and * This part is actually done when decoding, but is present at this place for illustration. Check our paper for more information.
MIULAB Each notebook takes and * This part is actually done when decoding, but is present at this place for illustration. Check our paper for more information.
[]
GEM-SciDuet-train-31#paper-1044#slide-10
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how humans generate the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoder-decoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted on the benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model by about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds in math word problems.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts q u a n ti fi e r 个 b a n a n a 香 蕉 , e v e r y 每 ( b a s k e t) < u n k > 6 .0 q u a n ti fi e r 个 , ta k e o ff 拿 掉 h o w m a n y 多 少 q u a n ti fi e r 个 , th e n 就 c a n 可 以 e x a c tl y 正 好 fi ll 装 9 .0 q u a n ti fi e r 个 b a s k e ts 篮 子 了 < u n k > .", "9.0 6.0 58.0 Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be token off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
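To illustrate how the decoder's stack actions and the semantic transformer f_op(e1, e2) = tanh(U ReLU(W [e1; e2] + b) + c) interact (Sections 3.3.2 and 3.3.4 above), the following is a minimal runnable sketch, assuming PyTorch. The learned stack action and operand selectors are replaced by a hard-coded postfix action sequence for the worked example x = (10 - 1 * 5) / 0.5, the variable-generation step is simplified away, and the operand semantics are random stand-ins for the encoder outputs; all names are illustrative, not the authors' code.

import torch
import torch.nn as nn

DIM = 256  # dimension of semantic representations

class SemanticTransformer(nn.Module):
    """f_op(e1, e2) = tanh(U ReLU(W [e1; e2] + b) + c); one instance per operator."""
    def __init__(self, dim=DIM):
        super().__init__()
        self.W = nn.Linear(2 * dim, dim)  # W [e1; e2] + b
        self.U = nn.Linear(dim, dim)      # U (.) + c
    def forward(self, e1, e2):
        return torch.tanh(self.U(torch.relu(self.W(torch.cat([e1, e2], dim=-1)))))

transformers = {op: SemanticTransformer() for op in "+-*/"}

# Each operand carries a symbolic part and a semantic vector; the random vectors
# below stand in for the encoder outputs h^E_{p_i} and the generated e_x.
operands = {sym: torch.randn(DIM) for sym in ["x", "10", "1", "5", "0.5"]}

# Hard-coded postfix action sequence for x = (10 - 1 * 5) / 0.5; in the model these
# actions come from the stack action selector and the operand selector.
actions = [("push", "x"), ("push", "10"), ("push", "1"), ("push", "5"),
           ("*",), ("-",), ("push", "0.5"), ("/",), ("=",)]

stack, equations = [], []
for act in actions:
    if act[0] == "push":                 # push: symbolic + semantic pair goes on the stack
        sym = act[1]
        stack.append((sym, operands[sym]))
    elif act[0] == "=":                  # equal application: pop two, record an equation
        v_rhs, _ = stack.pop()
        v_lhs, _ = stack.pop()
        equations.append(f"{v_lhs} = {v_rhs}")
    else:                                # operator application: pop two, push composed pair
        op = act[0]
        v_right, e_right = stack.pop()
        v_left, e_left = stack.pop()
        stack.append((f"({v_left} {op} {v_right})", transformers[op](e_left, e_right)))

print(equations)  # ['x = ((10 - (1 * 5)) / 0.5)']

The point of keeping both parts on the stack is that each composed term, such as (10 - 1 * 5), retains a symbolic string for the final equation and a semantic vector produced by the transformer, so the next stack action can be chosen from the semantics of the stack's top two elements.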
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-10
Operands and Their Semantics
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying pens? Symbolic Part x Semantic Part
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying pens? Symbolic Part x Semantic Part
[]
GEM-SciDuet-train-31#paper-1044#slide-11
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how humans generate the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoder-decoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted on the benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model by about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds in math word problems.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts q u a n ti fi e r 个 b a n a n a 香 蕉 , e v e r y 每 ( b a s k e t) < u n k > 6 .0 q u a n ti fi e r 个 , ta k e o ff 拿 掉 h o w m a n y 多 少 q u a n ti fi e r 个 , th e n 就 c a n 可 以 e x a c tl y 正 好 fi ll 装 9 .0 q u a n ti fi e r 个 b a s k e ts 篮 子 了 < u n k > .", "9.0 6.0 58.0 Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be token off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
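The stack actions and inference procedure described above (Sections 3.3.2 and 3.5) can be made concrete with a small sketch. This is an illustrative reimplementation under our own assumptions, not the authors' released code: the learned semantic vectors are replaced by placeholder strings, the names `run_stack_actions` and `semantic_transformer` are ours, and only the symbolic side is actually executed, ending with the SymPy solving step the paper mentions.

```python
# Illustrative sketch (not the authors' code) of the stack-based decoding:
# operands are (symbolic value, semantic representation) pairs, operators pop
# two of them, and "=" records an equation that SymPy then solves.
import sympy

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def semantic_transformer(op, e1, e2):
    # Placeholder for the learned f_op(e1, e2); here it only records the composition.
    return f"f_{op}({e1}, {e2})"

def run_stack_actions(actions):
    """Replay (action, operand) pairs and return the recorded equations."""
    stack, equations = [], []
    for action, operand in actions:
        if action == "push":                      # operand selector's choice
            stack.append(operand)
        elif action in OPS:                       # operator application
            (v2, e2), (v1, e1) = stack.pop(), stack.pop()
            stack.append((OPS[action](v1, v2),
                          semantic_transformer(action, e1, e2)))
        elif action == "=":                       # equal application
            (v2, _), (v1, _) = stack.pop(), stack.pop()
            equations.append(sympy.Eq(v1, v2))
    return equations

# Running example: x = (10 - 1 * 5) / 0.5, generated in postfix order.
x = sympy.Symbol("x")                             # from the "variable generation" action
actions = [("push", (x, "e_x")),
           ("push", (10, "money Tom has")),
           ("push", (1, "price of one pen")),
           ("push", (5, "number of pens bought")),
           ("*", None), ("-", None),
           ("push", (sympy.Rational(1, 2), "price of one notebook")),
           ("/", None), ("=", None)]
equation = run_stack_actions(actions)[0]          # Eq(x, 10)
print(sympy.solve(equation, x))                   # [10] -> Tom can buy 10 notebooks
```

The action masking at inference time (no operator or "=" when the stack holds fewer than two elements) is omitted here for brevity.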
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-11
Intuition of Using Semantics
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens? Number of pens bought. Price of a pen.
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens? Number of pens bought. Price of a pen.
[]
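The slide's point that operators act on the semantics of their operands is realized in the paper by a per-operator semantic transformer, f_op(e1, e2) = tanh(U_op ReLU(W_op [e1; e2] + b_op) + c_op) (Section 3.3.4). A minimal sketch of that module follows; the 256-dimensional size matches the appendix, but the module names and the random example vectors are ours, not the released code.

```python
# Sketch of the per-operator semantic transformer f_op. Each operator gets its
# own parameters, so "+" and "*" can compose operand semantics differently.
import torch
import torch.nn as nn

class SemanticTransformer(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.hidden = nn.Linear(2 * dim, dim)   # W_op [e1; e2] + b_op
        self.output = nn.Linear(dim, dim)       # U_op (.) + c_op

    def forward(self, e1, e2):
        h = torch.relu(self.hidden(torch.cat([e1, e2], dim=-1)))
        return torch.tanh(self.output(h))

transformers = nn.ModuleDict({op: SemanticTransformer()
                              for op in ["+", "-", "*", "/"]})
e_price, e_count = torch.randn(256), torch.randn(256)   # stand-ins for learned vectors
e_total = transformers["*"](e_price, e_count)            # semantics of "1 * 5"
print(e_total.shape)                                      # torch.Size([256])
```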
GEM-SciDuet-train-31#paper-1044#slide-12
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
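As a rough illustration of the encoder the abstract refers to: the paper runs a bidirectional LSTM over the problem words and takes the hidden states at the numbers' positions as their semantic representations (Section 3.1). The sketch below uses a toy vocabulary, toy positions, and our own variable names; it is not the authors' implementation.

```python
# Minimal sketch of constant-representation extraction: BLSTM over word
# embeddings, then read out the hidden states at the positions of the numbers.
import torch
import torch.nn as nn

emb_dim, hidden = 300, 128
embed = nn.Embedding(5000, emb_dim)                     # word embeddings e^P_t
blstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)

tokens = torch.randint(0, 5000, (1, 20))                # one tokenized problem
constant_positions = [3, 9, 15]                         # indices p_i of the numbers
h, _ = blstm(embed(tokens))                             # h: (1, 20, 2 * hidden)
constant_repr = h[0, constant_positions]                # e^c_i = h^E_{p_i}
print(constant_repr.shape)                              # torch.Size([3, 256])
```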
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts q u a n ti fi e r 个 b a n a n a 香 蕉 , e v e r y 每 ( b a s k e t) < u n k > 6 .0 q u a n ti fi e r 个 , ta k e o ff 拿 掉 h o w m a n y 多 少 q u a n ti fi e r 个 , th e n 就 c a n 可 以 e x a c tl y 正 好 fi ll 装 9 .0 q u a n ti fi e r 个 b a s k e ts 篮 子 了 < u n k > .", "9.0 6.0 58.0 Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be token off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-12
Equation Generation in Postfix
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens?
Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens?
[]
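For training, the paper converts each target equation into its postfix form (Section 3.4), which is exactly the push/operator order this slide illustrates. Below is a small sketch of that conversion for the running example; the use of Python's ast module is our choice, since the paper does not say how the conversion is implemented.

```python
# Post-order traversal of the expression tree gives the postfix token sequence
# used as the gold stack-action order during training (illustrative sketch).
import ast

OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def to_postfix(node):
    if isinstance(node, ast.BinOp):
        return to_postfix(node.left) + to_postfix(node.right) + [OPS[type(node.op)]]
    if isinstance(node, ast.Constant):
        return [node.value]
    if isinstance(node, ast.Name):
        return [node.id]
    raise ValueError(f"unsupported node: {ast.dump(node)}")

# Running example: x = (10 - 1 * 5) / 0.5
expr = ast.parse("(10 - 1 * 5) / 0.5", mode="eval").body
print(["x"] + to_postfix(expr) + ["="])
# ['x', 10, 1, 5, '*', '-', 0.5, '/', '=']
```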
GEM-SciDuet-train-31#paper-1044#slide-13
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts q u a n ti fi e r 个 b a n a n a 香 蕉 , e v e r y 每 ( b a s k e t) < u n k > 6 .0 q u a n ti fi e r 个 , ta k e o ff 拿 掉 h o w m a n y 多 少 q u a n ti fi e r 个 , th e n 就 c a n 可 以 e x a c tl y 正 好 fi ll 装 9 .0 q u a n ti fi e r 个 b a s k e ts 篮 子 了 < u n k > .", "9.0 6.0 58.0 Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be token off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
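To make the decoder components described in the paper content above more concrete, the following is a minimal NumPy sketch of the semantic transformer f_op(e1, e2) = tanh(U ReLU(W[e1; e2] + b) + c) and of an attention-style operand selector. It is an illustration only, not the authors' released implementation: the dimension D, the randomly initialised parameters, and the plain dot-product scoring (standing in for the learned tanh scoring network with parameters w, W, b) are simplifications chosen to keep the example self-contained and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy embedding size; the actual model uses much larger hidden sizes

# Parameters of one operator's semantic transformer (random here; learned in the model).
W, b = rng.normal(size=(D, 2 * D)), rng.normal(size=D)
U, c = rng.normal(size=(D, D)), rng.normal(size=D)

def semantic_transformer(e1, e2):
    # f_op(e1, e2) = tanh(U ReLU(W [e1; e2] + b) + c)
    h = np.maximum(0.0, W @ np.concatenate([e1, e2]) + b)
    return np.tanh(U @ h + c)

def operand_probabilities(state, candidates):
    # Attention-style scoring of operand candidates (cf. the operand selector);
    # a dot product replaces the learned scoring network to keep the sketch short.
    scores = np.array([state @ e for e in candidates])
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

e_10, e_5 = rng.normal(size=D), rng.normal(size=D)
print(semantic_transformer(e_10, e_5))                      # vector for the combined operand
print(operand_probabilities(rng.normal(size=D), [e_10, e_5]))  # softmax over candidates
```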
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-13
Equation Generation by Stack Actions
The decoder generates stack actions. An equation is generated with stack actions applied to the stack, one action at a time (slide animation frames: "Generated Actions: x ...").
The decoder generates stack actions. An equation is generated with stack actions applied to the stack, one action at a time (slide animation frames: "Generated Actions: x ...").
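As a concrete illustration of this slide, here is a small, self-contained Python sketch (not the released E2EMathSolver code) of how a postfix sequence of stack actions rebuilds the equation x = (10 − 1 × 5) ÷ 0.5 while carrying a semantic vector for every stack element. The toy_semantic_transformer below simply averages the two vectors so the example stays runnable; in the actual model this is the learned transformation of Section 3.3.4, and operand vectors come from the encoder.

```python
# Minimal sketch of the decoder's stack mechanics: "push" adds an operand,
# an operator pops two elements and pushes the combined symbol plus a
# transformed semantic vector, and "=" records a finished equation.
OPS = {"+", "-", "*", "/"}

def toy_semantic_transformer(op, e1, e2):
    # Stand-in for f_op(e1, e2); here just the elementwise average.
    return [(a + b) / 2 for a, b in zip(e1, e2)]

def run_stack_actions(actions, operands):
    """actions: list like ["push x", "push 10", ..., "-", "="];
    operands: dict mapping operand symbol -> semantic vector."""
    stack, equations = [], []
    for act in actions:
        if act.startswith("push "):
            sym = act.split(" ", 1)[1]
            stack.append((sym, operands[sym]))
        elif act in OPS:
            (v2, e2), (v1, e1) = stack.pop(), stack.pop()
            stack.append((f"({v1} {act} {v2})",
                          toy_semantic_transformer(act, e1, e2)))
        elif act == "=":
            (v2, _), (v1, _) = stack.pop(), stack.pop()
            equations.append(f"{v1} = {v2}")
    return equations

operands = {"x": [0.1, 0.2], "10": [0.3, 0.1], "1": [0.0, 0.5],
            "5": [0.4, 0.4], "0.5": [0.2, 0.2]}
# Postfix actions for x = (10 - 1 * 5) / 0.5
actions = ["push x", "push 10", "push 1", "push 5", "*", "-",
           "push 0.5", "/", "="]
print(run_stack_actions(actions, operands))  # ['x = ((10 - (1 * 5)) / 0.5)']
```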
[]
GEM-SciDuet-train-31#paper-1044#slide-15
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how humans generate equations from problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoder-decoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted on the benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model by about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds in math word problems.
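The encoder summarized in this abstract (Section 3.1 of the paper content above) can be sketched as follows. This is a hedged, PyTorch-flavored illustration rather than the authors' code: the class name ConstantEncoder, the vocabulary size, the embedding and hidden dimensions, and the toy token ids and constant positions are all made up for the example. Only the overall idea follows the paper: run a BLSTM over the problem tokens and read off the hidden states at the constants' positions as their semantic representations.

```python
import torch
import torch.nn as nn

class ConstantEncoder(nn.Module):
    """BLSTM over the problem tokens; the hidden state at each constant's
    position serves as that constant's semantic representation."""
    def __init__(self, vocab_size, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.blstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, token_ids, constant_positions):
        h, _ = self.blstm(self.emb(token_ids))           # (B, T, 2*hidden)
        idx = constant_positions.unsqueeze(-1).expand(-1, -1, h.size(-1))
        return h, torch.gather(h, 1, idx)                # per-constant vectors

# toy usage with made-up ids and positions
enc = ConstantEncoder(vocab_size=100)
tokens = torch.randint(0, 100, (1, 12))
positions = torch.tensor([[3, 7, 9]])
states, const_vecs = enc(tokens, positions)
print(const_vecs.shape)  # torch.Size([1, 3, 256])
```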
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts q u a n ti fi e r 个 b a n a n a 香 蕉 , e v e r y 每 ( b a s k e t) < u n k > 6 .0 q u a n ti fi e r 个 , ta k e o ff 拿 掉 h o w m a n y 多 少 q u a n ti fi e r 个 , th e n 就 c a n 可 以 e x a c tl y 正 好 fi ll 装 9 .0 q u a n ti fi e r 个 b a s k e ts 篮 子 了 < u n k > .", "9.0 6.0 58.0 Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be token off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016) are adopted.", "The hidden state size of the LSTMs used in the encoder and decoder is 256.", "The dimension of the hidden layers in the attention, the semantic transformer, and the operand/stack action selectors is 256.", "The dropout rate is set to 0.1 before the input of the decoder LSTM, before the stack action selector, and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, the same as Wang et al., for a fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al. and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results that seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 shows the results that our model predicts correctly but seq2seq cannot." ] }
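The Appendix B settings just listed (Adam with learning rate 0.001, hidden sizes of 256, dropout 0.1) translate directly into a few lines of setup. Below is a minimal sketch, assuming PyTorch; the single LSTM only stands in for the full encoder-decoder so that the optimizer call is runnable, and the 300-dimensional input is an assumption matching FastText-sized embeddings, not a value stated in the paper.

```python
# Minimal sketch of the Appendix B hyperparameter setup, assuming PyTorch.
import torch
import torch.nn as nn

HIDDEN_SIZE = 256      # LSTM hidden size; also the selector/attention hidden layers
DROPOUT_RATE = 0.1     # applied before the decoder LSTM and around the selectors
LEARNING_RATE = 1e-3   # Adam learning rate reported in Appendix B

# Stand-in for the full encoder-decoder solver; 300-d inputs assume FastText embeddings.
model = nn.LSTM(input_size=300, hidden_size=HIDDEN_SIZE, batch_first=True)
dropout = nn.Dropout(p=DROPOUT_RATE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```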
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-15
Training Process
Target equation is given. Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens? <bos> x
Target equation is given. Each notebook takes $0.5 and each pen takes $1. Tom has $10. How many notebooks can he buy after buying 5 pens? <bos> x
[]
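The "Training Process" slide above reflects the paper's teacher-forced objective: the target equation is converted to postfix form, a stack-action selection loss L1 is charged at every decoding step, and an operand selection loss L2 is added at the steps whose gold action is "push". The sketch below illustrates that loss, assuming PyTorch; the tensor shapes and the PUSH action id are illustrative assumptions, not taken from the released code.

```python
# Sketch of the training loss: L = sum_t [ L1(push) + L2(y_t) if y_t is an operand
# else L1(y_t) ], computed with teacher forcing over the postfix target equation.
import torch
import torch.nn.functional as F

PUSH = 0  # hypothetical id of the "push operand" stack action

def equation_loss(action_logits, operand_logits, target_actions, target_operands):
    """action_logits:  (T, n_actions)  stack-action scores per decoding step
    operand_logits:  (T, n_operands) operand scores per decoding step
    target_actions:  (T,) gold stack action ids (PUSH at operand steps)
    target_operands: (T,) gold operand index at push steps (ignored elsewhere)
    """
    # L1: stack-action selection loss at every decoding step
    l1 = F.cross_entropy(action_logits, target_actions, reduction="sum")
    # L2: operand selection loss, only at steps whose gold action is "push"
    push_steps = target_actions == PUSH
    l2 = F.cross_entropy(operand_logits[push_steps],
                         target_operands[push_steps], reduction="sum")
    return l1 + l2
```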
GEM-SciDuet-train-31#paper-1044#slide-17
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how humans generate the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoder-decoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted on the benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model by about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems.
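The encoder mentioned in this abstract is, in the paper's Section 3.1.1, a bidirectional LSTM whose hidden state at each number's position is taken as that constant's semantic representation, e^c_i = h^E_{p_i}. Below is a minimal sketch, assuming PyTorch; the class name and dimensions are placeholders rather than the authors' code.

```python
# Sketch of the constant-representation encoder (Section 3.1.1):
# run a BLSTM over the problem and gather the outputs at the number positions p_i.
import torch
import torch.nn as nn

class ConstantEncoder(nn.Module):
    def __init__(self, embed_dim: int = 300, hidden_size: int = 256):
        super().__init__()
        # two directions of size hidden_size // 2 concatenate to hidden_size
        self.blstm = nn.LSTM(embed_dim, hidden_size // 2,
                             bidirectional=True, batch_first=True)

    def forward(self, word_embeddings, constant_positions):
        # word_embeddings:    (batch, seq_len, embed_dim)
        # constant_positions: (batch, n_constants) long indices p_i
        h, _ = self.blstm(word_embeddings)                 # h^E_t for every t
        idx = constant_positions.unsqueeze(-1).expand(-1, -1, h.size(-1))
        e_c = torch.gather(h, dim=1, index=idx)            # e^c_i = h^E_{p_i}
        return h, e_c
```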
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts q u a n ti fi e r 个 b a n a n a 香 蕉 , e v e r y 每 ( b a s k e t) < u n k > 6 .0 q u a n ti fi e r 个 , ta k e o ff 拿 掉 h o w m a n y 多 少 q u a n ti fi e r 个 , th e n 就 c a n 可 以 e x a c tl y 正 好 fi ll 装 9 .0 q u a n ti fi e r 个 b a s k e ts 篮 子 了 < u n k > .", "9.0 6.0 58.0 Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be token off such that exactly 9 baskets are filled?\".", "from (2) to e c i = b c i , where b c i is a parameter.", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are, Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 .", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014; .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, the attention shows the word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-17
Results
Acc. Retrieval Template Generation Ensemble Retrieval BLSTM Self-Attention Seq2Seq w/SNI Proposed Hybrid
Acc. Retrieval Template Generation Ensemble Retrieval BLSTM Self-Attention Seq2Seq w/SNI Proposed Hybrid
[]
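The accuracies in the "Results" row above come from greedy decoding at inference time: operator and equal actions are masked out while the stack holds fewer than two elements, decoding stops once the unknown variable can be solved, and the recorded equation is handed to SymPy (Section 3.5 of the paper). The snippet below shows only that final SymPy step, on the running example equation from Figure 1; it assumes SymPy is installed and is not the authors' pipeline code.

```python
# Final step of inference (Section 3.5): solve the recorded equation with SymPy.
# The equation is the running example x = (10 - 1 * 5) / 0.5 from Figure 1.
import sympy

x = sympy.Symbol("x")
equation = sympy.Eq(x, (10 - 1 * 5) / 0.5)
print(sympy.solve(equation, x))  # -> [10.0000000000000]
```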
GEM-SciDuet-train-31#paper-1044#slide-18
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
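The decoder described here conditions each decision on an attention over the encoded problem text, defined in the paper's Eqs. (7)-(9) as a feed-forward scorer over [u; v_i] followed by a softmax-weighted sum of the v_i. Below is a minimal sketch, assuming PyTorch; the class name and dimensions are illustrative.

```python
# Sketch of the additive attention in Eqs. (7)-(9): s_i = w^T tanh(W [u; v_i] + b),
# alpha = softmax(s), output = sum_i alpha_i v_i.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, dim_u: int, dim_v: int, hidden: int = 256):
        super().__init__()
        self.W = nn.Linear(dim_u + dim_v, hidden)   # W [u; v_i] + b
        self.w = nn.Linear(hidden, 1, bias=False)   # w^T tanh(.)

    def forward(self, u: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
        # u: (batch, dim_u); values: (batch, m, dim_v)
        u_exp = u.unsqueeze(1).expand(-1, values.size(1), -1)
        scores = self.w(torch.tanh(self.W(torch.cat([u_exp, values], dim=-1))))
        alpha = torch.softmax(scores, dim=1)         # attention weights alpha_i
        return (alpha * values).sum(dim=1)           # weighted sum of the values
```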
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts from (2) to e c i = b c i , where b c i is a parameter.", "Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be taken off such that exactly 9 baskets are filled?\".", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which merely represents a symbolic placeholder in an equation.", "Because the semantic transformer is meant to transform semantic representations, applying this component here is meaningless.", "Hence the semantic transformer is also replaced with f op (e 1 , e 2 ) = e op , as in the setting of the previous ablation test.", "The results show that the model without semantic representations of operands suffers a significant accuracy drop of 3.5%.", "The main contribution of this paper, modeling the semantic meanings of symbols, is thus validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are.", "Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 ) (20).", "Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with the handcrafted features for solving math word problems proposed by prior research (Hosseini et al., 2014) .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, where the attention shows word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of the time the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding a stack action, is usually low until the last \"operator application\" stack action.", "For example, in Figure 5 , g sa 2 is less than 0.20 until the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "Error Analysis We randomly sample some results predicted incorrectly by our model, as shown in Table 3 .", "Table 3 (Problem & Results): 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1/6; Correct Answer: 70.", "火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in 48 hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48; Correct Answer: 33 1/3.", "小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7; Correct Answer: 13.", "In the first example, the error is due to language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, the above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how humans perform reasoning when writing equations, providing better interpretation without the need for labeled rationales.", "A Algorithm Detail The training and inference procedures are shown in Algorithm 1.", "B Hyperparameter Setup The model is trained with the Adam optimizer (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016) are adopted.", "The hidden state size of the LSTM used in the encoder and decoder is 256.", "The dimension of the hidden layers in the attention, the semantic transformer, and the operand/stack action selector is 256.", "A dropout rate of 0.1 is applied before the input of the decoder LSTM, before the stack action selector, and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq and Our Model We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the results that seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 shows the results that our model can predict correctly but seq2seq cannot." ] }
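The hyperparameter description above (256-dimensional LSTM and hidden layers, dropout 0.1, Adam with learning rate 0.001, pretrained FastText embeddings) can be summarized in a short, illustrative sketch. This is not the authors' released code: PyTorch, the class and argument names, the 300-dimensional embedding size, and the split of 256 units across the two LSTM directions are assumptions for illustration only.

import torch
import torch.nn as nn

# Assumed sizes: 256 follows Appendix B; the FastText embedding size (300) is an assumption.
HIDDEN, EMB_DIM, DROPOUT = 256, 300, 0.1

class ProblemEncoder(nn.Module):
    # Bidirectional LSTM over the problem text (Eq. (1)); constant representations
    # are read off at the constants' token positions (Eq. (2)).
    def __init__(self):
        super().__init__()
        # HIDDEN // 2 per direction so the concatenated output is 256 (one interpretation of the text)
        self.blstm = nn.LSTM(EMB_DIM, HIDDEN // 2, bidirectional=True, batch_first=True)
        self.dropout = nn.Dropout(DROPOUT)

    def forward(self, word_embs, constant_positions):
        # word_embs: (batch, seq_len, EMB_DIM); constant_positions: one list of indices per example
        outputs, _ = self.blstm(self.dropout(word_embs))
        constant_reprs = [outputs[b, idx] for b, idx in enumerate(constant_positions)]
        return outputs, constant_reprs

encoder = ProblemEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)  # Adam, lr = 0.001 (Appendix B)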
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-18
Ablation Test
Char-Based Word-Based Word-Based Word-Based
Char-Based Word-Based Word-Based Word-Based
[]
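The "Semantic Transformer" ablation discussed in the row above contrasts Eq. (19), f_op(e1, e2) = tanh(U ReLU(W [e1; e2] + b) + c), with a per-operator embedding f_op(e1, e2) = e_op. The following sketch illustrates both variants under stated assumptions; PyTorch, the module names, and the operator label strings are not the authors' implementation.

import torch
import torch.nn as nn

OPS = ("add", "sub", "mul", "div")   # illustrative names for +, -, *, /

class SemanticTransformer(nn.Module):
    # Original model: one two-layer network per operator, following Eq. (19).
    def __init__(self, dim=256):
        super().__init__()
        self.nets = nn.ModuleDict({
            op: nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim), nn.Tanh())
            for op in OPS
        })

    def forward(self, op, e1, e2):
        return self.nets[op](torch.cat([e1, e2], dim=-1))

class OperatorEmbeddingOnly(nn.Module):
    # Ablation "-Semantic Transformer": f_op(e1, e2) = e_op, ignoring the operands' semantics.
    def __init__(self, dim=256):
        super().__init__()
        self.emb = nn.ParameterDict({op: nn.Parameter(torch.randn(dim)) for op in OPS})

    def forward(self, op, e1, e2):
        return self.emb[op]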
GEM-SciDuet-train-31#paper-1044#slide-19
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts from (2) to e c i = b c i , where b c i is a parameter.", "Figure 4 : The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas.", "Each basket can contain 6 bananas.", "How many bananas are needed to be taken off such that exactly 9 baskets are filled?\".", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b c i , which merely represents a symbolic placeholder in an equation.", "Because the semantic transformer is meant to transform semantic representations, applying this component here is meaningless.", "Hence the semantic transformer is also replaced with f op (e 1 , e 2 ) = e op , as in the setting of the previous ablation test.", "The results show that the model without semantic representations of operands suffers a significant accuracy drop of 3.5%.", "The main contribution of this paper, modeling the semantic meanings of symbols, is thus validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are.", "Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e c i = Attention(h E p i , {h E t } m t=1 ) (20).", "Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with the handcrafted features for solving math word problems proposed by prior research (Hosseini et al., 2014) .", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q t in (6) ) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, where the attention shows word-level alignment between source and target languages, solving math word problems requires high-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the 
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of the time the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding a stack action, is usually low until the last \"operator application\" stack action.", "For example, in Figure 5 , g sa 2 is less than 0.20 until the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "Error Analysis We randomly sample some results predicted incorrectly by our model, as shown in Table 3 .", "Table 3 (Problem & Results): 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1/6; Correct Answer: 70.", "火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in 48 hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48; Correct Answer: 33 1/3.", "小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7; Correct Answer: 13.", "In the first example, the error is due to language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, the above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how humans perform reasoning when writing equations, providing better interpretation without the need for labeled rationales.", "A Algorithm Detail The training and inference procedures are shown in Algorithm 1.", "B Hyperparameter Setup The model is trained with the Adam optimizer (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016) are adopted.", "The hidden state size of the LSTM used in the encoder and decoder is 256.", "The dimension of the hidden layers in the attention, the semantic transformer, and the operand/stack action selector is 256.", "A dropout rate of 0.1 is applied before the input of the decoder LSTM, before the stack action selector, and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq and Our Model We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the results that seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 shows the results that our model can predict correctly but seq2seq cannot." ] }
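Section 3.5 above states that the generated equation is handed to the SymPy package to solve for the unknown variable. A minimal usage sketch follows; the helper name and the example equation (taken from the Figure 1 example) are illustrative rather than the authors' code.

import sympy

def solve_for_x(equation_str):
    # e.g. "x = (10 - 1 * 5) / 0.5"  ->  10
    lhs, rhs = equation_str.split("=")
    x = sympy.Symbol("x")
    solutions = sympy.solve(sympy.Eq(sympy.sympify(lhs), sympy.sympify(rhs)), x)
    return solutions[0] if solutions else None

print(solve_for_x("x = (10 - 1 * 5) / 0.5"))  # prints 10.0000000000000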
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-19
Self Attention for Qualitative Analysis
Each notebook takes and
Each notebook takes and
[]
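Section 3.4 of the paper content above trains the two selectors from the postfix form of the target equation: an operand token supervises both a "push" stack action and the operand choice (losses L1 and L2), while an operator or "=" supervises the stack action alone (loss L1). A small, illustrative sketch of that target construction; the token and label names are assumptions.

OPERATOR_TOKENS = {"+", "-", "*", "/", "="}

def postfix_to_targets(postfix_tokens):
    targets = []
    for tok in postfix_tokens:
        if tok in OPERATOR_TOKENS:
            targets.append(("apply", tok))   # supervise the stack action selector only (L1)
        else:
            targets.append(("push", tok))    # supervise both selectors (L1 on 'push', L2 on the operand)
    return targets

# x = (10 - 1 * 5) / 0.5  ->  postfix of both sides, then '='
example = ["x", "10", "1", "5", "*", "-", "0.5", "/", "="]
for step, target in enumerate(postfix_to_targets(example)):
    print(step, target)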
GEM-SciDuet-train-31#paper-1044#slide-20
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoderdecoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
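To make the inference procedure above concrete, the following is a minimal sketch of the greedy decoding loop: the most probable stack action is taken at each step, binary operators and the equal application are masked out while the stack holds fewer than two operands, and the finished equation is handed to SymPy. It is only an illustration, not the authors' implementation: the `stack_action_scores` and `operand_scores` callables stand in for the neural selectors (which in the paper also condition on the decoder state and attention features), the action inventory and operand ordering follow the usual postfix convention, and only the symbolic half of the stack is tracked.

```python
# Sketch of greedy decoding with action masking and SymPy solving (illustrative).
import numpy as np
import sympy

OPERATORS = {"+", "-", "*", "/"}
ACTIONS = ["gen_var", "push", "+", "-", "*", "/", "="]

def greedy_decode(stack_action_scores, operand_scores, operands, max_steps=30):
    """stack_action_scores(stack) / operand_scores(stack) return unnormalised score
    arrays over ACTIONS / operands; `operands` are the constants from the problem text."""
    x = sympy.Symbol("x")            # unknown variable created by the gen_var action
    stack, equations = [], []
    for _ in range(max_steps):
        scores = np.array(stack_action_scores(stack), dtype=float)
        if len(stack) < 2:           # mask illegal binary operators and "="
            for a in OPERATORS | {"="}:
                scores[ACTIONS.index(a)] = -np.inf
        action = ACTIONS[int(np.argmax(scores))]
        if action == "gen_var":
            stack.append(x)
        elif action == "push":
            idx = int(np.argmax(operand_scores(stack)))
            stack.append(sympy.sympify(operands[idx]))
        elif action in OPERATORS:    # apply a binary operator to the top two operands
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b, "*": a * b, "/": a / b}[action])
        else:                        # "=" closes one equation
            b, a = stack.pop(), stack.pop()
            equations.append(sympy.Eq(a, b))
            solution = sympy.solve(equations, x)
            if solution:             # stop once the unknown can be solved
                return solution
    return None
```

In the full model the loop would also thread the decoder LSTM state and the semantic representations of stack elements through each step; the sketch keeps only the parts needed to show the masking and the symbolic solving.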
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts from (2) to e^c_i = b^c_i , where b^c_i is a parameter.", "[Figure 4: The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas. Each basket can contain 6 bananas. How many bananas are needed to be taken off such that exactly 9 baskets are filled?\"]", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b^c_i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are.", "Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e^c_i = Attention(h^E_{p_i}, {h^E_t}^m_{t=1}).", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with the hand-crafted features for solving math word problems proposed by prior research (Hosseini et al., 2014).", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q_t in (6)) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, where the attention shows the word-level alignment between source and target languages, solving math word problems requires higher-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
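For readers who want to reproduce the Appendix B setup, the hyperparameters listed above can be collected into a small configuration object. The sketch below uses PyTorch and illustrative names (`SolverConfig`, `build_optimizer`); only the values themselves (Adam with learning rate 0.001, 256-dimensional hidden states, dropout 0.1, pretrained FastText embeddings, 5-fold cross-validation) come from the text.

```python
# Illustrative restatement of the Appendix B hyperparameters as a config object.
from dataclasses import dataclass
import torch

@dataclass
class SolverConfig:
    embedding: str = "fasttext"    # pretrained word embeddings (Joulin et al., 2016)
    hidden_size: int = 256         # encoder/decoder LSTM hidden state size
    attn_hidden_size: int = 256    # hidden layers of attention / semantic transformer / selectors
    dropout: float = 0.1           # before the decoder LSTM and around the stack action selector
    learning_rate: float = 1e-3    # Adam (Kingma and Ba, 2014)
    folds: int = 5                 # accuracy reported with 5-fold cross-validation

def build_optimizer(model: torch.nn.Module, cfg: SolverConfig) -> torch.optim.Optimizer:
    return torch.optim.Adam(model.parameters(), lr=cfg.learning_rate)
```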
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-20
Attention for Operand Semantics
The attention focuses on: • gain, get, fill, etc. • every, how many, etc.
The attention focuses on: • gain, get, fill, etc. • every, how many, etc.
[]
GEM-SciDuet-train-31#paper-1044#slide-21
1044
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how humans generate equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equations as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoder-decoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted on the benchmark dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model by about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214 ], "paper_content_text": [ "Introduction Automatically solving math word problems has been an interesting research topic and also been viewed as a way of evaluating machines' ability (Mandal and Naskar, 2019) .", "For human, writing down an equation that solves a math word problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding.", "Specifically, to solve a math word problem, we first need to know the goal of the given problem, then understand the semantic 1 The source code is available at https://github.", "com/MiuLab/E2EMathSolver.", "meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what to write in the equation.", "Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge.", "Because those features are often in the lexical level, it is not clear whether machines really understand the math problems.", "Also, most prior work evaluated their approaches on relatively small datasets, and the capability of generalization is concerned.", "This paper considers the reasoning procedure when writing down the associated equation given a problem.", "Figure 1 illustrates the problem solving process.", "The illustration shows that human actually assigns the semantic meaning to each number when manipulating symbols, including operands (numbers) and operators (+ − ×÷).", "Also, we believe that the semantic meaning of operands can help us decide which operator to use.", "For example, the summation of \"price of one pen\" and \"number of pens Tom bought\" is meaningless; therefore the addition would not be chosen.", "Following the observation above, this paper proposes a novel encoder decoder model, where the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped with a stack that facilitates tracking the semantic meanings of operands.", "The contributions of this paper are 4-fold: • This paper is the first work that models semantic meanings of operands and operators for math word problems.", "• This paper proposes an end-to-end neural math solver with a novel decoding process that utilizes the stack to generate associated equations.", "Figure 1 : The solving process of the math word problem \"Each notebok takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebook can he buy after buying 5 pens?\"", "and the associated equation is x = (10 − 1 × 5) ÷ 0.5.", "The associated equation is x = (10 − 1 × 5) 
÷ 0.5.", "• This paper achieves the state-of-the-art performance on the large benchmark dataset Math23K.", "• This paper is capable of providing interpretation and reasoning for the math word problem solving procedure.", "Related Work There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model to focus on the quantities in the problems Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy et al., 2016; Upadhyay and Chang, 2017; Roy and Roth, 2018; Wang et al., 2018) .", "Recently, Mehta et al.", "; Wang et al.", "; Ling et al.", "attempted at learning models without predefined features.", "Following the recent trend, the proposed end-to-end model in this paper does not need any hand-crafted features.", "Kushman et al.", "first extracted templates about math expressions from the training answers, and then trained models to select templates and map quantities in the problem to the slots in the template.", "Such two-stage approach has been tried and achieved good results (Upadhyay and Chang, 2017) .", "The prior work highly relied on human knowledge, where they parsed problems into equations by choosing the expression tree with the highest score calculated by an operator classifier, working on a hand-crafted \"trigger list\" containing quantities and noun phrases in the problem, or utilizing features extracted from text spans (Roy et al., , 2016 Koncel-Kedziorski et al., 2015) .", "Shi et al.", "defined a Dolphin language to connect math word problems and logical forms, and generated rules to parse math word problems.", "Upadhyay et al.", "parsed math word problems without explicit equation annotations.", "Roy and Roth clas-sified math word problems into 4 types and used rules to decide the operators accordingly.", "Wang et al.", "trained the parser using reinforcement learning with hand-crafted features.", "Hosseini et al.", "modeled the problem text as transition of world states, and the equation is generated as the world states changing.", "Our work uses a similar intuition, but hand-crafted features are not required and our model can be trained in an end-to-end manner.", "Some end-to-end approaches have been proposed, such as generating equations directly via a seq2seq model (Wang et al., 2017) .", "Ling et al.", "tried to generate solutions along with its rationals with a seq2seq-like model for better interpretability.", "This paper belongs to the end-to-end category, but different from the previous work; we are the first approach that generates equations with stack actions, which facilitate us to simulate the way how human solves problems.", "Furthermore, the proposed approach is the first model that is more interpretable and provides reasoning steps without the need of rational annotations.", "End-to-End Neural Math Solver Our approach composes of two parts, an encoder and a decoder, where the process of solving math word problems is viewed as transforming multiple text spans from the problems into the target information the problems ask for.", "In the example shown in Figure 1 , all numbers in the problem are attached with the associated semantics.", "Motivated by the observation, we design an encoder to extract the semantic representation of each number in the problem text.", "Considering that human usually manipulates those numbers and operators (such as addition, subtraction, etc.)", "based on their semantics for problem solving, a decoder is designed to construct the equation, where the 
semantics is aligned with the representations extracted by the encoder.", "The idea of the proposed model Tom has $ 10 5 pens ?", "Encoder Stack Attention Operation Selector Apply OP OP Return Decoder Operand Selector Semantic Transformer Each notebook takes $0.5 and each pen takes $1.", "Tom has $10.", "How many notebooks can he buy after buying 5 pens?", "Stack Attention is to imitate the human reasoning process for solving math word problems.", "The model architecture is illustrated in Figure 2 .", "Encoder The encoder aims to extract the semantic representation of each constant needed for solving problems.", "However, the needed constants may come from either the given problem texts or domain knowledge, so we detail these two procedures as follows.", "Constant Representation Extraction For each math word problem, we are given a passage consisting of words {w P t } m t=1 , whose word embeddings are {e P t } m t=1 .", "The problem text includes some numbers, which we refer as constants.", "The positions of constants in the problem text are denoted as {p i } n i=1 .", "In order to capture the semantic representation of each constant by considering its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997) : h E t , c E t = BLSTM(h E t−1 , c E t−1 , e P t ), (1) and then for the i-th constant in the problem, its semantic representation e c i is modeled by the corresponding BLSTM output vector: e c i = h E p i .", "(2) External Constant Leveraging External constants, including 1 and π, are leveraged, because they are required to solve a math word problem, but not mentioned in the problem text.", "Due to their absence from the problem text, we cannot extract their semantic meanings by BLSTM in (2) .", "Instead, we model their semantic representation e π , e 1 as parts of the model parameters.", "They are randomly initialized and are learned during model training.", "Decoder The decoder aims at constructing the equation that can solve the given problem.", "We generate the equation by applying stack actions on a stack to mimic the way how human understands an equation.", "Human knows the semantic meaning of each term in the equation, even compositing of operands and operators like the term \"(10−1×5)\" in Figure 1 .", "Then what operator to apply on a pair operands can be chosen based on their semantic meanings accordingly.", "Hence we design our model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going to apply to.", "Note that the operands a operator can apply to can be any results generated previously.", "That is the reason why we use \"stack\" as our data structure in order to keep track of the operands a operator is going to apply to.", "The stack contains both symbolic and semantic representations of operands, denoted as S = [(v S lt , e S lt ), (v S lt−1 , e S lt−1 ), · · · , (v S 1 , e S 1 )], (3) where v S of each pair is the symbolic part, such as x + 1, while e S is the semantic representation, which is a vector.", "The components in the decoder are shown in the right part of Figure 2 , each of which is detailed below.", "Decoding State Features At each decoding step, decisions are made based on features of the current state.", "At each step, fea- tures r sa and r opd are extracted to select a stack action (section 3.3.2) and an operand to push (section 3.3.3).", "Specifically, the features are the gated concatenation of following 
vectors: • h D t is the output of an LSTM, which encodes the history of applied actions: h D t , c D t = LSTM(h D t−1 , c D t−1 , res t−1 ), (4) where res t−1 is the result from the previous stack action similar to the seq2seq model (Sutskever et al., 2014) .", "For example, if the previous stack action o t−1 is \"push\", then res t−1 is the semantic representation pushed into the stack.", "If the previous stack action o t−1 is to apply an operator , then res t−1 is the semantic representation generated by f .", "• s t is the stack status.", "It is crucial because some operators are only applicable to certain combinations of operand semantics, which is similar to the type system in programming languages.", "For example, operating multiplication is applicable to the combination of \"quantity of an item\" and \"price of an item\", while operating addition is not.", "Considering that all math operators supported here (+, −, ×, ÷) are binary operators, the semantic representations of the stack's top 2 elements at the time t − 1 are considered: s t = [e S lt ; e S lt ].", "(5) • q t incorporates problem information in the decision.", "It is believed that the attention mechanism (Luong et al., 2015) can effectively capture dependency for longer distance.", "Thus, the attention mechanism over the encoding problem h E 1 , h E 2 , · · · is adopted: q t = Attention(h D t , {h E i } m i=1 ), (6) where the attention function in this paper is defined as a function with learnable parameters w, W, b: Attention(u, {v i } m i=1 ) = m i=1 α i h i , (7) α i = exp(s i ) m l=1 exp(s i ) , (8) s i = w T tanh(W T [u; v i ] + b).", "(9) In order to model the dynamic features for different decoding steps, features in r sa t is gated as follows: r sa t = [g sa t,1 · h D t ; g sa t,2 · s t ; g sa t,3 · q t ], (10) g sa t = σ(W sa · [h D t ; s t ; q t ]), (11) where σ is a sigmoid function and W sa is a learned gating parameter.", "r opd t is defined similarly, but with a different learned gating parameter W opd .", "Stack Action Selector The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the unknowns are solved.", "The probability of choosing action a at the decoding step t is calculated with a network NN constituted of one hidden layer and ReLU as the activation function: P (Y t |{y i } t−1 i=1 , {w i } m i=1 ) (12) = StackActionSelector(r sa t ) = softmax(NN(r sa t )) , where r sa t is decoding state features as defined in section 3.3.", "Stack Actions The available stack actions are listed below: • Variable generation: The semantic representation of an unknown variable x is generated dynamically as the first action in the decoding process.", "Note that this procedure provides the flexibility of solving problems with more than one unknown variables.", "The decoder module can decide how many unknown variables are required to solve the problem, and the semantic representation of the unknown variable is generated with an attention mechanism: e x = Attention(h D t , {h E i } m i=1 ).", "(13) • Push: This stack action pushes the operand chosen by the operand selector (section 3.3.3).", "Both the symbolic representation v * and semantic representation e * of the chosen operand would be pushed to the stack S in (3).", "Then the stack state becomes S = [(v S * , e S * ), (v S lt , e S lt ), · · · , (v S 1 , e S 1 )].", "(14) • Operator application ( ∈ {+, −, ×, ÷}): One stack action pops two elements from the top of the stack, which contains two pairs, (v i , e i ) and (v j 
, e j ), and then the associated symbolic operator, v k = v i v j , is recorded.", "Also, a semantic transformation function f for that operator is invoked, which generates the semantic representation of v k by transforming semantic representations of v i and v j to e k = f (e i , e j ).", "Therefore, after an operator is applied to the stack specified in (3) , the stack state becomes S =[(v S lt v S lt−1 , f (e S lt , e S lt−1 )), (15) (v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "• Equal application: When the equal application is chosen, it implies that an equation is completed.", "This stack action pops 2 tuples from the stack, (v i , e i ), (v j , e j ), and then v i = v j is recorded.", "If one of them is an unknown variable, the problem is solved.", "Therefore, after an OP is applied to the stack specified in (3) , the stack state becomes S = [(v S lt−2 , e S lt−2 ), · · · , (v S 1 , e S 1 )].", "(16) Operand Selector When the stack action selector has decided to push an operand, the operand selector aims at choosing which operand to push.", "The operand candidates e include constants provided in the problem text whose semantic representations are e c 1 , e c 2 , · · · , e c n , unknown variable whose semantic representation is e x , and two external constants 1 and π whose semantic representations are e 1 , e π : e = [e c 1 , e c 2 , · · · , e c n , e 1 , e π , e x ].", "An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what human does when solving math word problems.", "Inspired by addressing mechanisms of neural Turing machine (NTM) (Graves et al., 2014) , the probability of choosing the i-th operand candidate is the attention weights of r t over the semantic representations of the operand candidates as in (8) : P (Z t | {y i } t−1 i=1 , {w i } m i=1 ) (18) = OperandSelector(r opd t ) = AttentionWeight(r opd t , {e i } m i=1 ∪ {e 1 , e π , e x }), and r opd t is defined in section 3.3.", "Semantic Transformer A semantic transformer is proposed to generate the semantic representation of a new symbol resulted from applying an operator, which provides the capability of interpretation and reasoning for the target task.", "The semantic transformer for an operator ∈ {+, −, ×, ÷} transforms semantic representations of two operands e 1 , e 2 into f (e 1 , e 2 ) = tanh(U ReLU(W [e 1 ; e 2 ]+b )+c ), where W , U , b , c are model parameters.", "Semantic transformers for different operators have different parameters in order to model different transformations.", "Training Both stack action selection and operand selection can be trained in a fully supervised way by giving problems and associated ground truth equations.", "Because our model generates the equation with stack actions, the equation is first transformed into its postfix representation.", "Let the postfix representation of the target equation be y 1 , · · · y t , · · · , y T , where y t can be either an operator (+, −, ×, ÷, =) or a target operand.", "Then for each time step t, the loss can be computed as L(y t ) = L 1 (push op) + L 2 (y t ) y t is an operand L 1 (y t ) otherwise , where L 1 is the stack action selection loss and L 2 is the operand selection loss defined as L 1 (y t ) = − log P (Y t = y t | {o i } t−1 i=1 , {w i } m i=1 ), L 2 (y t ) = − log P (Z t = y t | r t ).", "The objective of our training process is to minimize the total loss for the whole equation, T t=1 L(y t ).", "Inference When performing inference, at each 
time step t, the stack action with the highest probability P (Y t |{ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "If the chosen stack action is \"push\", the operand with the highest probability P (Z t |{Ỹ i } t−1 i=1 , {w i } m i=1 ) is chosen.", "When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would be masked out to prevent illegal stack actions, so all generated equations must be legal math expressions.", "The decoder decodes until the unknown variable can be solved.", "After the equations are generated, a Python package SymPy (Meurer et al., 2017) is used to solve the unknown variable.", "The inference procedure example is illustrated in Figure 3 .", "The detailed algorithm can be found in Algorithm 1.", "Experiments To evaluate the performance of the proposed model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.", "Settings The experiments are benchmarked on the dataset Math23k (Wang et al., 2017) , which contains 23,162 math problems with annotated equations.", "Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷.", "Also, except π and 1, quantities in the equation can be found in the problem text.", "There are also other large scale datasets like Dol-phin18K (Shi et al., 2015) and AQuA (Ling et al., 2017) , containing 18,460 and 100,000 math word problems respectively.", "The reasons about not evaluating on these two datasets are 1) Dolphin18k contains some unlabeled math word problems and some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations in the rational are not formal (e.g.", "mixed with texts, using x to represent ×, etc.)", "and inconsistent.", "Therefore, the following experiments are performed and analyzed using Math23K, the only large scaled, good-quality dataset. 
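Relating this back to the decoder components described earlier in this record, the sketch below gives one possible PyTorch rendering of the per-operator semantic transformer f(e_1, e_2) = tanh(U ReLU(W [e_1; e_2] + b) + c) and of the operand selector, which scores candidates with additive attention of the decoding feature over the candidates' semantic representations. Module and parameter names are assumptions made for illustration, the feature dimension of r_t is only a guess consistent with the 256-dimensional setting reported in the appendix, and this is not the authors' code.

```python
# Possible rendering of the semantic transformer and operand selector (illustrative).
import torch
import torch.nn as nn

class SemanticTransformer(nn.Module):
    """One transformer per operator; parameters are not shared across operators."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.inner = nn.Linear(2 * dim, dim)   # W, b
        self.outer = nn.Linear(dim, dim)       # U, c

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.outer(torch.relu(self.inner(torch.cat([e1, e2], dim=-1)))))

class OperandSelector(nn.Module):
    """Additive attention of the decoding feature r_t over operand candidates."""
    def __init__(self, feat_dim: int = 1024, cand_dim: int = 256, hidden: int = 256):
        # feat_dim assumes r_t = [h; s; q] with 256-dim h and q and a 512-dim stack feature.
        super().__init__()
        self.proj = nn.Linear(feat_dim + cand_dim, hidden)
        self.score = nn.Linear(hidden, 1, bias=False)   # the learned vector w in (9)

    def forward(self, r_t: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
        # r_t: (feat_dim,); candidates: (num_candidates, cand_dim)
        expanded = r_t.unsqueeze(0).expand(candidates.size(0), -1)
        scores = self.score(torch.tanh(self.proj(torch.cat([expanded, candidates], dim=-1))))
        return torch.softmax(scores.squeeze(-1), dim=-1)  # probability of pushing each candidate
```

The additive attention here mirrors equations (7)-(9): a learned vector scores tanh(W[u; v_i] + b), and a softmax turns the scores into the push probabilities of (18).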
)", "do h D t ← LSTM(h D t−1 , ct−1, ret) st ← S.get top2() h E ← Attention(h D t−1 , h E ) rt ← [h D t , st, h E ] psa ← StackActionSelector(rt) p opd ← OperandSelector(rt) if training then Target equation y is available when training.", "Yt ← yt if yt is operand then loss ← loss + L1(push) + L2(yt) else loss ← loss + L1(yt) end if else Yt ← StackActionSelector(r sa t ) if Yt = push then Zt ← OperandSelector(r opd t ) end if end if if Yt = gen var then e x ← Attention(h D t , h E ) ret ← e x else if Yt = push then S.push(vZ t , eZ t ) ret ← eZ t else if Yt ∈ {+, Results The results are shown in Our proposed end-to-end model belongs to the generation category, and the single model performance achieved by our proposed model is new state-of-the-art (> 65%) and even better than the hybrid model result (64.7%).", "In addition, we are the first to report character-based performance on this dataset, and the character-based results are slightly better than the word-based ones.", "Among the single model performance, our models obtain about more than 7% accuracy improvement compared to the previous best one (Wang et al., 2017) .", "The performance of our character-based model also shows that our model is capable of learning the relatively accurate semantic representations without word boundaries and achieves better performance.", "Ablation Test To better understand the performance contributed by each proposed component, we perform a series of ablation tests by removing components one by one and then checking the performance by 5-fold cross validation.", "Table 2 shows the ablation results.", "Char-Based v.s.", "Word-Based As reported above, using word-based model instead of character-based model only causes 0.5% performance drop.", "To fairly compare with prior word- Table 2 : 5-fold cross validation results of ablation tests.", "based models, the following ablation tests are performed on the word-based approach.", "Word-Based -Gate It uses r t instead of r sa t and r opr t as the input of both StackActionSelector and OperandSelector.", "Word-Based -Gate -Attention Considering that the prior generation-based model (seq2seq) did not use any attention mechanism, we compare the models with and without the attention mechanism.", "Removing attention means excluding q t−1 in (11), so the input of both operator and operand selector becomes r t = [h D t ; s t ].", "The result implies that our model is not better than previous models solely because of the attention.", "Word-Based -Gate -Attention -Stack To check the effectiveness of the stack status (s t in (11)), the experiments of removing the stack status from the input of both operator and operand selectors (r t = h D t ) are conducted.", "The results well justify our idea of choosing operators based on semantic meanings of operands.", "Word-Based -Semantic Transformer To validate the effectiveness of the idea that views an operator as a semantic transformer, we modify the semantic transformer function of the operator into f (e 1 , e 2 ) = e , where e is a learnable parameter and is different for different operators.", "Therefore, e acts like the embedding of the operator , and the decoding process is more similar to a general seq2seq model.", "The results show that the semantic transformer in the original model encodes not only the last operator applied on the operands but other information that helps the selectors.", "Word-Based -Semantic Representation To explicitly evaluate the effectiveness of operands' semantic representations, we rewrite semantic 
representation of the i-th operand in the problem texts from (2) to e^c_i = b^c_i , where b^c_i is a parameter.", "[Figure 4: The self-attention map visualization of operands' semantic expressions for the problem \"There are 58 bananas. Each basket can contain 6 bananas. How many bananas are needed to be taken off such that exactly 9 baskets are filled?\"]", "Thus for every problem, the representation of the i-th operand is identical, even though their meanings in different problems may be different.", "This modification assumes that no semantic information is captured by b^c_i , which can merely represent a symbolic placeholder in an equation.", "Because the semantic transformer is to transform the semantic representations, applying this component is meaningless.", "Here the semantic transformer is also replaced with f (e 1 , e 2 ) = e as the setting of the previous ablation test.", "The results show that the model without using semantic representations of operands causes a significant accuracy drop of 3.5%.", "The main contribution of this paper about modeling semantic meanings of symbols is validated and well demonstrated here.", "Qualitative Analysis To further analyze whether the proposed model can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are.", "Constant Embedding Analysis To better understand the information encoded in the semantic representations of constants in the problem, a self-attention is performed when their semantic representations are extracted by the encoder.", "Namely, we rewrite (2) as e^c_i = Attention(h^E_{p_i}, {h^E_t}^m_{t=1}).", "(20) Then we check the trained self-attention map (α in the attention function) on the validation dataset.", "For some problems, the self-attention that generates semantic representations of constants in the problem concentrates on the number's quantifier or unit, and sometimes it also focuses on informative verbs, such as \"gain\", \"get\", \"fill\", etc., in the sentence.", "For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.", "The numbers \"58\" and \"6\" focus more on the quantifier-related words (e.g.", "\"every\" and \"how many\"), while \"9\" pays higher attention to the verb \"fill\".", "The results are consistent with the hand-crafted features for solving math word problems proposed by prior research (Hosseini et al., 2014).", "Hence, we demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems without providing human-crafted knowledge.", "Decoding Process Visualization We visualize the attention map (q_t in (6)) to see how the attention helps the decoding process.", "An example is shown in the top of Figure 5 , where most attention focuses on the end of the sentence.", "Unlike the machine translation task, where the attention shows the word-level alignment between source and target languages, solving math word problems requires higher-level understanding due to the task complexity.", "To further analyze the effectiveness of the proposed gating mechanisms for stack action and operand selection, the
activation of gates g sa , g opd at each step of the decoding process is shown in the bottom of Figure 5 .", "It shows that most of time, the gate activation is high, demonstrating that the proposed gating mechanisms play an important role during decoding.", "We also observe a common phenomenon that the activation g sa 2 , which controls how much attention the stack action selector puts on the stack state when deciding an stack action, is usually low until the last \"operator application\" stack action.", "For example, in the example of Figure 5 , g sa 2 is less than 0.20 till the last argument selection stack action, and activates when deciding the division operator application (÷) and the equal application (=).", "It may result from the higher-level semantics of the operand (6.75−2.75) on the stack when selecting the stack action division operator application (÷).", "In terms Problem & Results 红花有60朵,黄花比红花多1/6朵,黄花有多少朵. (There are 60 red flowers.", "Yellow flowers are more than red ones by 1/6.", "How many yellow flowers are there?)", "Generated Equation: 60 + 1 6 Correct Answer: 70 火车 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢 多少 千米 ? (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25 hours.", "How many kilometers per hour is the car slower than the train?)", "Generated Equation: 2250 ÷ 25 − 5920 ÷ 48 Correct Answer: 33 1 3 小红前面 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7 people behind.", "How many persons are there in total?)", "Generated Equation: 5 + 7 Correct Answer: 13 Figure 5: Word attention and gate activation (g sa and g opd ) visualization when generating stack actions for the problem \"6.75 deducting 5 times of an unknown number is 2.75.", "What is the unknown number?", "\", where the associated equation is x = (6.75 − 2.75) ÷ 5.", "Note that g opd is meaningful only when the t-th stack action is push op.", "of the activation of g opd , we find that three features are important in most cases, demonstrating the effectiveness of the proposed mechanisms.", "Error Analysis We randomly sample some results predicted incorrectly by our model shown in Table 3 .", "In the first example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without considering the exact value of the number.", "From the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly.", "For the third problem, it cannot be solved by using only the surface meaning but requires some common sense.", "Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.", "Conclusion We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving math word problems.", "The experiments show that the proposed model achieves the state-of-the-art performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model.", "In sum, the proposed neural math solver is designed based on how human performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.", "A Algorithm Detail The training and inference procedures are shown in Algortihm 1.", "B Hyperparameter Setup The model is trained with the optimizer adam (Kingma and Ba, 2014), and the learning rate is set to 0.001.", "Pretrained 
embeddings using FastText (Joulin et al., 2016 ) are adopted.", "The hidden state size of LSTM used in the encoder and decoder is 256.", "The dimension of hidden layers in attention, semantic transformer and operand/stack action selector is 256.", "The dropout rate is set as 0.1 before inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack action selector and attention.", "The reported accuracy is the result of 5-fold cross-validation, same as Wang et al.", "for fair comparison.", "C Error Analysis between Seq2Seq We implement the seq2seq model as proposed by Wang et al.", "and compare the performance difference between our proposed model and the baseline seq2seq model.", "Table 4 shows the generated results seq2seq predicts correctly but our model predicts incorrectly.", "Table 5 show the results our model can predict correctly but seq2seq cannot." ] }
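To connect the training setup above back to the objective of Section 3.4, the following is a small sketch of the per-step loss: when the target postfix token y_t is an operand, the step pays the negative log-likelihood of the "push" stack action plus that of the selected operand; otherwise only the stack-action term applies, and the losses are summed over the whole equation. The probability tensors and index maps are assumed inputs, named only for illustration.

```python
# Illustrative per-step and per-equation loss following Section 3.4.
import torch

def step_loss(action_probs: torch.Tensor, operand_probs: torch.Tensor, target: str,
              action_index: dict, operand_index: dict) -> torch.Tensor:
    """action_probs / operand_probs are softmax distributions for one decoding step."""
    if target in operand_index:                              # y_t is an operand
        l1 = -torch.log(action_probs[action_index["push"]])  # stack action loss L1
        l2 = -torch.log(operand_probs[operand_index[target]])# operand selection loss L2
        return l1 + l2
    return -torch.log(action_probs[action_index[target]])    # y_t is an operator or "="

def equation_loss(step_outputs, targets, action_index, operand_index) -> torch.Tensor:
    """Sum of step losses over the postfix form y_1..y_T of the target equation."""
    total = torch.zeros(())
    for (a_probs, o_probs), y_t in zip(step_outputs, targets):
        total = total + step_loss(a_probs, o_probs, y_t, action_index, operand_index)
    return total
```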
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.3.1", "3.3.2", "3.3.3", "3.3.4", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "End-to-End Neural Math Solver", "Encoder", "Constant Representation Extraction", "External Constant Leveraging", "Decoder", "Decoding State Features", "Stack Action Selector", "Stack Actions", "Operand Selector", "Semantic Transformer", "Training", "Inference", "Experiments", "Settings", "Results", "Ablation Test", "Qualitative Analysis", "Constant Embedding Analysis", "Decoding Process Visualization", "Error Analysis", "Conclusion" ] }
GEM-SciDuet-train-31#paper-1044#slide-21
Conclusion
Approach: equation generation with stack • Originality: automatic extraction of operand semantics • Performance: a SOTA end-to-end neural model on Math23k
Approach: equation generation with stack • Originality: automatic extraction of operand semantics • Performance: a SOTA end-to-end neural model on Math23k
[]
GEM-SciDuet-train-32#paper-1046#slide-0
1046
Reasoning with Sarcasm by Reading In-between
Sarcasm is a sophisticated speech act which commonly manifests on social communities such as Twitter and Reddit. The prevalence of sarcasm on the social web is highly disruptive to opinion mining systems due to not only its tendency of polarity flipping but also usage of figurative language. Sarcasm commonly manifests with a contrastive theme either between positive-negative sentiments or between literal-figurative scenarios. In this paper, we revisit the notion of modeling contrast in order to reason with sarcasm. More specifically, we propose an attention-based neural model that looks in-between instead of across, enabling it to explicitly model contrast and incongruity. We conduct extensive experiments on six benchmark datasets from Twitter, Reddit and the Internet Argument Corpus. Our proposed model not only achieves state-of-the-art performance on all datasets but also enjoys improved interpretability.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239 ], "paper_content_text": [ "Introduction Sarcasm, commonly defined as 'An ironical taunt used to express contempt', is a challenging NLP problem due to its highly figurative nature.", "The usage of sarcasm on the social web is prevalent and can be frequently observed in reviews, microblogs (tweets) and online forums.", "As such, the battle against sarcasm is also regularly cited as one of the key challenges in sentiment analysis and opinion mining applications (Pang et al., 2008) .", "Hence, it is both imperative and intuitive that effective sarcasm detectors can bring about numerous benefits to opinion mining applications.", "Sarcasm is often associated to several linguistic phenomena such as (1) an explicit contrast between sentiments or (2) disparity between the conveyed emotion and the author's situation (context).", "Prior work has considered sarcasm to be a contrast between a positive and negative sentiment (Riloff et al., 2013) .", "Consider the following examples: 1.", "I absolutely love to be ignored!", "2.", "Yay!!!", "The best thing to wake up to is my neighbor's drilling.", "3.", "Perfect movie for people who can't fall asleep.", "Given the examples, we make a crucial observation -Sarcasm relies a lot on the semantic relationships (and contrast) between individual words and phrases in a sentence.", "For instance, the relationships between phrases {love, ignored}, {best, drilling} and {movie, asleep} (in the examples above) richly characterize the nature of sarcasm conveyed, i.e., word pairs tend to be contradictory and more often than not, express a juxtaposition of positive and negative terms.", "This concept is also explored in (Joshi et al., 2015) in which the authors refer to this phenomena as 'incongruity'.", "Hence, it would be useful to capture the relationships between selected word pairs in a sentence, i.e., looking in-between.", "State-of-the-art sarcasm detection systems mainly rely on deep and sequential neural networks (Ghosh and Veale, 2016; Zhang et al., 2016) .", "In these works, compositional encoders such as gated recurrent units (GRU) or long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) are often employed, with the input document being parsed one word at a time.", "This has several shortcomings for the sarcasm detection task.", "Firstly, there is no explicit interaction between word pairs, which hampers its ability to explicitly model contrast, 
incongruity or juxtaposition of situations.", "Secondly, it is difficult to capture long-range dependencies.", "In this case, contrastive situations (or sentiments) which are commonplace in sarcastic language may be hard to detect with simple sequential models.", "To overcome the weaknesses of standard sequential models such as recurrent neural networks, our work is based on the intuition that modeling intra-sentence relationships can not only improve classification performance but also pave the way for more explainable neural sarcasm detection methods.", "In other words, our key intuition manifests itself in the form of an attention-based neural network.", "While the key idea of most neural attention mechanisms is to focus on relevant words and sub-phrases, it merely looks across and does not explicitly capture word-word relationships.", "Hence, it suffers from the same shortcomings as sequential models.", "In this paper, our aim is to combine the effectiveness of state-of-the-art recurrent models while harnessing the intuition of looking in-between.", "We propose a multi-dimensional intra-attention recurrent network that models intricate similarities between each word pair in the sentence.", "In other words, our novel deep learning model aims to capture 'contrast' (Riloff et al., 2013) and 'incongruity' (Joshi et al., 2015) within end-to-end neural networks.", "Our model can be thought of selftargeted co-attention (Xiong et al., 2016) , which allows our model to not only capture word-word relationships but also long-range dependencies.", "Finally, we show that our model produces interpretable attention maps which aid in the explainability of model outputs.", "To the best of our knowledge, our model is the first attention model that can produce explainable results in the sarcasm detection task.", "Briefly, the prime contributions of this work can be summarized as follows: • We propose a new state-of-the-art method for sarcasm detection.", "Our proposed model, the Multi-dimensional Intra-Attention Recurrent Network (MIARN) is strongly based on the intuition of compositional learning by leveraging intra-sentence relationships.", "To the best of our knowledge, none of the existing state-of-the-art models considered exploiting intra-sentence relationships, solely relying on sequential composition.", "• We conduct extensive experiments on multiple benchmarks from Twitter, Reddit and the Internet Argument Corpus.", "Our proposed MIARN achieves highly competitive performance on all benchmarks, outperforming existing state-of-the-art models such as GRNN (Zhang et al., 2016) and CNN-LSTM-DNN (Ghosh and Veale, 2016) .", "Related Work Sarcasm is a complex linguistic phenomena that have long fascinated both linguists and NLP researchers.", "After all, a better computational understanding of this complicated speech act could potentially bring about numerous benefits to existing opinion mining applications.", "Across the rich history of research on sarcasm, several theories such as the Situational Disparity Theory (Wilson, 2006 ) and the Negation Theory (Giora, 1995) have emerged.", "In these theories, a common theme is a motif that is strongly grounded in contrast, whether in sentiment, intention, situation or context.", "(Riloff et al., 2013) propagates this premise forward, presenting an algorithm strongly based on the intuition that sarcasm arises from a juxtaposition of positive and negative situations.", "Sarcasm Detection Naturally, many works in this area have treated the sarcasm detection task 
as a standard text classification problem.", "An extremely comprehensive overview can be found in (Joshi et al., 2017).", "Feature engineering approaches were highly popular, exploiting a wide and diverse range of features such as syntactic patterns (Tsur et al., 2010), sentiment lexicons (González-Ibánez et al., 2011), n-grams (Reyes et al., 2013), word frequency (Barbieri et al., 2014), word shape and pointedness features (Ptáček et al., 2014), readability and flips (Rajadesingan et al., 2015), etc.", "Notably, there have been quite a reasonable number of works that propose features based on similarity and contrast.", "(Hernández-Farías et al., 2015) measured the WordNet-based semantic similarity between words.", "(Joshi et al., 2015) proposed a framework based on explicit and implicit incongruity, utilizing features based on positive-negative patterns.", "(Joshi et al., 2016) proposed similarity features based on word embeddings.", "Deep Learning for Sarcasm Detection Deep learning-based methods have recently garnered considerable interest in many areas of NLP research.", "In our problem domain, (Zhang et al., 2016) proposed a recurrent-based model with a gated pooling mechanism for sarcasm detection on Twitter.", "(Ghosh and Veale, 2016) proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that achieves state-of-the-art performance.", "While our work focuses on document-only sarcasm detection, several notable works have proposed models that exploit personality information (Ghosh and Veale, 2017) and user context (Amir et al., 2016).", "Novel methods for sarcasm detection such as gaze / cognitive features (Mishra et al., 2016, 2017) have also been explored.", "(Peled and Reichart, 2017) proposed a novel framework based on neural machine translation to convert a sequence from sarcastic to non-sarcastic.", "(Felbo et al., 2017) proposed a layer-wise training scheme that utilizes emoji-based distant supervision for sentiment analysis and sarcasm detection tasks.", "Attention Models for NLP In the context of NLP, the key idea of neural attention is to soft-select a sequence of words based on their relative importance to the task at hand.", "Early innovations in attentional paradigms mainly involve neural machine translation (Luong et al., 2015) for aligning sequence pairs.", "Attention is also commonplace in many NLP applications such as sentiment classification (Chen et al., 2016; Yang et al., 2016), aspect-level sentiment analysis (Tay et al., 2018s, 2017b; Chen et al., 2017) and entailment classification (Rocktäschel et al., 2015).", "Co-attention / Bi-Attention (Xiong et al., 2016; Seo et al., 2016) is a form of pairwise attention mechanism that was proposed to model query-document pairs.", "Intra-attention can be interpreted as self-targeted co-attention and is seeing a lot of promising results in many recent works (Vaswani et al., 2017; Parikh et al., 2016; Tay et al., 2017a; Shen et al., 2017).", "The key idea is to model a sequence against itself, learning to attend while capturing long-term dependencies and word-word level interactions.", "To the best of our knowledge, our work is not only the first to apply intra-attention to sarcasm detection but also the first attention model for sarcasm detection.", "Our Proposed Approach In this section, we describe our proposed model.", "Figure 1 illustrates our overall model architecture.", "Input Encoding Layer Our model accepts a sequence of one-hot encoded vectors as an input.", "Each
one-hot encoded vector corresponds to a single word in the vocabulary.", "In the input encoding layer, each one-hot vector is converted into a low-dimensional vector representation (word embedding).", "The word embeddings are parameterized by an embedding layer W ∈ R^{n×|V|}.", "As such, the output of this layer is a sequence of word embeddings, i.e., {w_1, w_2, · · · , w_ℓ}, where ℓ is a predefined maximum sequence length.", "Multi-dimensional Intra-Attention In this section, we describe our multi-dimensional intra-attention mechanism for sarcasm detection.", "We first begin by describing the standard single-dimensional intra-attention.", "The multi-dimensional adaptation will be introduced later in this section.", "The key idea behind this layer is to look in-between, i.e., modeling the semantics between each word in the input sequence.", "We first begin by modeling the relationship of each word pair in the input sequence.", "A simple way to achieve this is to use a linear transformation layer to project the concatenation of each word embedding pair into a scalar score as follows: s_{ij} = W_a([w_i; w_j]) + b_a (1), where W_a ∈ R^{2n×1} and b_a ∈ R are the parameters of this layer.", "[· ; ·] is the vector concatenation operator and s_{ij} is a scalar representing the affinity score between the word pair (w_i, w_j).", "We can easily observe that s is a symmetric matrix of ℓ × ℓ dimensions.", "In order to learn the attention vector a, we apply a row-wise max-pooling operator on matrix s: a = softmax(max_row(s)) (2), where a ∈ R^ℓ is a vector representing the learned intra-attention weights.", "Then, the vector a is employed to learn a weighted representation of {w_1, w_2, · · · , w_ℓ} as follows: v_a = Σ_{i=1}^{ℓ} w_i a_i (3), where v_a ∈ R^n is the intra-attentive representation of the input sequence.", "While other choices of pooling operators may be also employed (e.g., mean-pooling over max-pooling), the choice of max-pooling is empirically motivated.", "Intuitively, this attention layer learns to pay attention based on a word's largest contribution to all words in the sequence.", "Since our objective is to highlight words that might contribute to the contrastive theories of sarcasm, a more discriminative pooling operator is desirable.", "Notably, we also mask values of s where i = j such that we do not allow the relationship scores of a word with respect to itself to influence the overall attention weights.", "Furthermore, our network can be considered as an 'inner' adaptation of neural attention, modeling intra-sentence relationships between the raw word representations instead of representations that have been compositionally manipulated.", "This allows word-to-word similarity to be modeled 'as it is' and not be influenced by composition.", "For example, when using the outputs of a compositional encoder (e.g., LSTM), matching words n and n + 1 might not be meaningful since they would be relatively similar in terms of semantic composition.", "For relatively short documents (such as tweets), it is also intuitive that attention typically focuses on the last hidden representation.", "Intuitively, the relationship between two words is often not straightforward.", "Words are complex and often hold more than one meaning (or word sense).", "As such, it might be beneficial to model multiple views between two words.", "This can be modeled by representing the word pair interaction with a vector instead of a scalar.", "As such, we propose a multi-dimensional adaptation of the intra-attention mechanism.", "The key idea
here is that each word pair is projected down to a low-dimensional vector before we compute the affinity score, which allows it to not only capture one view (one scalar) but also multiple views.", "A modification to Equation (1) constitutes our Multi-Dimensional Intra-Attention variant.", "s_{ij} = W_p(ReLU(W_q([w_i; w_j]) + b_q)) + b_p (4), where W_q ∈ R^{n×k}, W_p ∈ R^{k×1}, b_q ∈ R^k and b_p ∈ R are the parameters of this layer.", "The final intra-attentive representation is then learned with Equation (2) and Equation (3) which we do not repeat here for the sake of brevity.", "Long Short-Term Memory Encoder While we are able to simply use the learned representation v_a for prediction, it is clear that v_a does not encode compositional information and may miss out on important compositional phrases such as 'not happy'.", "Clearly, our intra-attention mechanism simply considers a word-by-word interaction and does not model the input document sequentially.", "As such, it is beneficial to use a separate compositional encoder for this purpose, i.e., learning compositional representations.", "To this end, we employ the standard Long Short-Term Memory (LSTM) encoder.", "The output of an LSTM encoder at each time-step can be briefly defined as: h_i = LSTM(w, i), ∀i ∈ [1, …, ℓ] (5), where ℓ represents the maximum length of the sequence and h_i ∈ R^d is the hidden output of the LSTM encoder at time-step i. d is the size of the hidden units of the LSTM encoder.", "LSTM encoders are parameterized by gating mechanisms learned via nonlinear transformations.", "Since LSTMs are commonplace in standard NLP applications, we omit the technical details for the sake of brevity.", "Finally, to obtain a compositional representation of the input document, we use v_c = h_ℓ, which is the last hidden output of the LSTM encoder.", "Note that the inputs to the LSTM encoder are the word embeddings right after the input encoding layer and not the output of the intra-attention layer.", "We found that applying an LSTM on the intra-attentively scaled representations does not yield any benefits.", "Prediction Layer The inputs to the final prediction layer are two representations, namely (1) the intra-attentive representation (v_a ∈ R^n) and (2) the compositional representation (v_c ∈ R^d).", "This layer learns a joint representation of these two views using a nonlinear projection layer.", "v = ReLU(W_z([v_a; v_c]) + b_z) (6), where W_z ∈ R^{(d+n)×d} and b_z ∈ R^d.", "Finally, we pass v into a Softmax classification layer.", "ŷ = softmax(W_f v + b_f) (7), where W_f ∈ R^{d×2} and b_f ∈ R^2 are the parameters of this layer. ŷ ∈ R^2 is the output of our proposed model.", "Optimization and Learning Our network is trained end-to-end, optimizing the standard binary cross-entropy loss function.", "J = −Σ_{i=1}^{N} [y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i)] + λR (8), where J is the cost function, ŷ is the output of the network, R = ||θ||_{L2} is the L2 regularization term and λ is the weight of the regularizer.", "Empirical Evaluation In this section, we describe our experimental setup and results.", "Our experiments were designed to answer the following research questions (RQs).", "• RQ1 -Does our proposed approach outperform existing state-of-the-art models?", "• RQ2 -What are the impacts of some of the architectural choices of our model?", "How much does intra-attention contribute to the model performance?", "Is the Multi-Dimensional adaptation better than the Single-Dimensional adaptation?", "• RQ3 -What can we interpret from the intra-attention
layers?", "Does this align with our hypothesis about looking in-between and modeling contrast?", "Datasets We conduct our experiments on six publicly available benchmark datasets which span across three well-known sources.", "• Tweets - Twitter is a microblogging platform which allows users to post statuses of less than 140 characters.", "We use two collections for sarcasm detection on tweets.", "More specifically, we use the datasets obtained from (1) (Ptáček et al., 2014), in which tweets are labeled via hashtag-based semi-supervised learning, i.e., hashtags such as #not, #sarcasm and #irony are marked as sarcastic tweets, and (2) (Riloff et al., 2013), in which tweets are hand-annotated and manually checked for sarcasm.", "For both datasets, we retrieve tweets using the Twitter API with the provided tweet IDs.", "• Reddit - Reddit is a highly popular social forum and community.", "Similar to Tweets, sarcastic posts are obtained via the tag '/s', which is marked by the authors themselves.", "We use two Reddit datasets which are obtained from the subreddits /r/movies and /r/technology respectively.", "Both datasets are subsets from (Khodak et al., 2017).", "• Debates - We use two datasets from the Internet Argument Corpus (IAC) (Lukin and Walker, 2017) which have been hand-annotated for sarcasm.", "This corpus, unlike the first two sources, is mainly concerned with long text and provides a diverse comparison to the other datasets.", "The IAC corpus was designed for research on political debates on online forums.", "We use the V1 and V2 versions of the sarcasm corpus which are denoted as IAC-V1 and IAC-V2 respectively.", "The statistics of the datasets used in our experiments are reported in Table 1.", "Compared Methods We compare our proposed model with the following algorithms.", "• NBOW is a simple neural bag-of-words baseline that sums all the word embeddings and passes the summed vector into a simple logistic regression layer.", "• CNN is a vanilla Convolutional Neural Network with max-pooling.", "CNNs are considered as compositional encoders that capture n-gram features by parameterized sliding windows.", "The filter width is 3 and the number of filters is f = 100.", "• LSTM is a vanilla Long Short-Term Memory Network.", "The size of the LSTM cell is set to d = 100.", "• ATT-LSTM (Attention-based LSTM) is an LSTM model with a neural attention mechanism applied to all the LSTM hidden outputs.", "We use a similar adaptation to (Yang et al., 2016), albeit only at the document level.", "• GRNN (Gated Recurrent Neural Network) is a Bidirectional Gated Recurrent Unit (GRU) model that was proposed for sarcasm detection by (Zhang et al., 2016).", "GRNN uses a gated pooling mechanism to aggregate the hidden representations from a standard BiGRU model.", "Since we only compare on document-level sarcasm detection, we do not use the variant of GRNN that exploits user context.", "• CNN-LSTM-DNN (Convolutional LSTM + Deep Neural Network), proposed by (Ghosh and Veale, 2016), is the state-of-the-art model for sarcasm detection.", "This model is a combination of a CNN, LSTM and Deep Neural Network via stacking.", "It stacks two layers of 1D convolution with 2 LSTM layers.", "The output passes through a deep neural network (DNN) for prediction.", "Both CNN-LSTM-DNN (Ghosh and Veale, 2016) and GRNN (Zhang et al., 2016) are state-of-the-art models for document-level sarcasm detection and have outperformed numerous neural and non-neural baselines.", "In particular, both works have well surpassed feature-based models
(Support Vector Machines, etc.), as such we omit comparisons for the sake of brevity and focus our comparisons on recent neural models instead.", "Moreover, since our work focuses only on document-level sarcasm detection, we do not compare against models that use external information such as user profiles, context, personality information (Ghosh and Veale, 2017) or emoji-based distant supervision (Felbo et al., 2017).", "For our model, we report results on both multi-dimensional and single-dimensional intra-attention.", "The two models are named MIARN and SIARN respectively.", "Implementation Details and Metrics We adopt the standard evaluation metrics for the sarcasm detection task, i.e., macro-averaged F1 and accuracy score.", "Additionally, we also report precision and recall scores.", "All deep learning models are implemented using TensorFlow (Abadi et al., 2015) and optimized on an NVIDIA GTX1070 GPU.", "Text is preprocessed with NLTK's Tweet tokenizer.", "Words that only appear once in the entire corpus are removed and marked with the UNK token.", "Document lengths are truncated at 40, 20 and 80 tokens for the Twitter, Reddit and Debates datasets respectively.", "Mentions of other users on the Twitter dataset are replaced by '@USER'.", "Documents with URLs (i.e., containing 'http') are removed from the corpus.", "Documents with less than 5 tokens are also removed.", "The optimizer used is RMSProp with an initial learning rate of 0.001.", "The L2 regularization is set to 10^{-8}.", "We initialize the word embedding layer with GloVe (Pennington et al., 2014).", "We use the GloVe model trained on 2B Tweets for the Tweets and Reddit datasets.", "The GloVe model trained on Common Crawl is used for the Debates corpus.", "The size of the word embeddings is fixed at d = 100 and the embeddings are fine-tuned during training.", "In all experiments, we use a development set to select the best hyperparameters.", "Each model is trained for a total of 30 epochs and the model is saved each time the performance on the development set improves.", "The batch size is tuned amongst {128, 256, 512} for all datasets.", "The only exception is the Tweets dataset from (Riloff et al., 2013), in which a batch size of 16 is used owing to the much smaller dataset size.", "For fair comparison, all models have the same hidden representation size, set to 100 for both recurrent and convolutional models (i.e., the number of filters).", "For MIARN, the size of the intra-attention hidden representation is tuned amongst {4, 8, 10, 20}.", "Experimental Results Table 2, Table 3 and Table 4 report a performance comparison of all benchmarked models on the Tweets, Reddit and Debates datasets respectively.", "We observe that our proposed SIARN and MIARN models achieve the best results across all six datasets.", "The relative improvement differs across domain and datasets.", "On the Tweets dataset from (Ptáček et al., 2014), MIARN achieves a ≈ 2%−2.2% improvement in terms of F1 and accuracy when compared against the best baseline.", "On the other Tweets dataset from (Riloff et al., 2013), the performance gain of our proposed model is larger, i.e., a 3%−5% improvement on average over most baselines.", "Our proposed SIARN and MIARN models achieve very competitive performance on the Reddit datasets, with an average margin of ≈ 2% improvement over the best baselines.", "Notably, the baselines we compare against are extremely competitive state-of-the-art
neural network models.", "This further reinforces the effectiveness of our proposed approach.", "Additionally, the performance improvement on Debates (long text) is significantly larger than on short text (i.e., Twitter and Reddit).", "For example, MIARN outperforms GRNN and CNN-LSTM-DNN by ≈ 8%−10% on both IAC-V1 and IAC-V2.", "On this note, we can safely put RQ1 to rest.", "Overall, the performance of MIARN is often marginally better than SIARN (with some exceptions, e.g., the Tweets dataset from (Riloff et al., 2013)).", "We believe that this is attributed to the fact that more complex word-word relationships can be learned by using multi-dimensional values instead of single-dimensional scalars.", "The performance gain brought by our additional intra-attentive representations can be further observed by comparing against the vanilla LSTM model.", "Clearly, removing the intra-attention network reverts our model to the standard LSTM.", "The performance improvements are encouraging, leading to almost 10% improvement in terms of F1 and accuracy.", "On datasets with short text, the performance improvement is often a modest ≈ 2%−3% (RQ2).", "Notably, our proposed models also perform much better on long text, which can be attributed to the intra-attentive representations explicitly modeling long-range dependencies.", "Intuitively, this is problematic for models that only capture sequential dependencies (e.g., word by word).", "Finally, the relative performance of competitor methods is as expected.", "NBOW performs the worst, since it is just a naive bag-of-words model without any compositional or sequential information.", "On short text, LSTMs are overall better than CNNs.", "However, this trend is reversed on long text (i.e., Debates) since the LSTM model may be overburdened by overly long sequences.", "On short text, we also found that attention (or the gated pooling mechanism from GRNN) did not really help make any significant improvements over the vanilla LSTM model, and a qualitative explanation of why this is so is deferred to the next section.", "However, attention helps for long text (such as debates), resulting in Attention LSTMs becoming the strongest baseline on the Debates datasets.", "In contrast, our proposed intra-attentive model is effective on both short text and long text, outperforming Attention LSTMs consistently on all datasets.", "In-depth Model Analysis In this section, we present an in-depth analysis of our proposed model.", "More specifically, we not only aim to showcase the interpretability of our model but also explain how representations are formed.", "More specifically, we test our model (trained on the Tweets dataset of (Ptáček et al., 2014)) on two examples.", "We extract the attention maps of three models, namely MIARN, Attention LSTM (ATT-LSTM) and applying the attention mechanism directly on the word embeddings without using an LSTM encoder (ATT-RAW).", "Table 5 shows the visualization of the attention maps.", "In the first example (true label), we notice that the attention maps of MIARN are focusing on the words 'love' and 'ignored'.", "This is in concert with our intuition about modeling contrast and incongruity.", "On the other hand, both ATT-LSTM and ATT-RAW learn very different attention maps.", "As for ATT-LSTM, the attention weight is focused completely on the last representation - the token '!!'.", "Additionally, we also observed that this is true for many examples in the Tweets and Reddit datasets.", "We believe that this is the reason why standard neural attention does
not help, as what the attention mechanism is learning is to select the last representation (i.e., the vanilla LSTM).", "Without the LSTM encoder, the attention weights focus on 'love' but not 'ignored'.", "This fails to capture any concept of contrast or incongruity.", "Next, we consider the false-labeled example.", "This time, the attention maps of MIARN are not as distinct as before.", "However, they focus on sentiment-bearing words, composing the words 'ignored sucks' to form the majority of the intra-attentive representation.", "This time, passing the vector made up of 'ignored sucks' allows the subsequent layers to recognize that there is no contrasting situation or sentiment.", "Similarly, ATT-LSTM focuses on the last word 'time', which is totally non-interpretable.", "On the other hand, ATT-RAW focuses on relatively non-meaningful words such as 'big'.", "Overall, we analyzed two cases (positive and negative labels) and found that MIARN produces very explainable attention maps.", "In general, we found that MIARN is able to identify contrast and incongruity in sentences, allowing our model to better detect sarcasm.", "This is facilitated by modeling intra-sentence relationships.", "Notably, the standard vanilla attention is not explainable or interpretable.", "Conclusion Based on the intuition of intra-sentence similarity (i.e., looking in-between), we proposed a new neural network architecture for sarcasm detection.", "Our network incorporates a multi-dimensional intra-attention component that learns an intra-attentive representation of the sentence, enabling it to detect contrastive sentiment, situations and incongruity.", "Extensive experiments over six public benchmarks confirm the empirical effectiveness of our proposed model.", "Our proposed MIARN model outperforms strong state-of-the-art baselines such as GRNN and CNN-LSTM-DNN.", "Analysis of the intra-attention scores shows that our model learns highly interpretable attention weights, paving the way for more explainable neural sarcasm detection methods." ] }
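The intra-attention layer described in the paper content above (Equations 1-4) can be summarised in a short sketch. This is not the authors' TensorFlow implementation; it is an illustrative PyTorch re-implementation under stated assumptions (batch-first inputs, diagonal masking applied before the row-wise max-pooling), and the class name IntraAttention is our own.

```python
# Illustrative sketch of single- and multi-dimensional intra-attention
# (Eqs. 1-4 above); assumes PyTorch and inputs of shape
# (batch, seq_len, emb_dim). Not the authors' TensorFlow code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntraAttention(nn.Module):
    def __init__(self, emb_dim: int, k: int = 0):
        super().__init__()
        # k == 0 -> single-dimensional scoring (Eq. 1); k > 0 -> multi-dimensional (Eq. 4)
        if k > 0:
            self.score = nn.Sequential(nn.Linear(2 * emb_dim, k),
                                       nn.ReLU(),
                                       nn.Linear(k, 1))
        else:
            self.score = nn.Linear(2 * emb_dim, 1)

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        b, l, n = w.shape
        # All word-pair concatenations [w_i; w_j]: shape (b, l, l, 2n)
        wi = w.unsqueeze(2).expand(b, l, l, n)
        wj = w.unsqueeze(1).expand(b, l, l, n)
        s = self.score(torch.cat([wi, wj], dim=-1)).squeeze(-1)   # (b, l, l)
        # Mask the diagonal so a word's score against itself is ignored
        eye = torch.eye(l, dtype=torch.bool, device=w.device)
        s = s.masked_fill(eye, float("-inf"))
        # Row-wise max-pooling followed by softmax (Eq. 2)
        a = F.softmax(s.max(dim=-1).values, dim=-1)               # (b, l)
        # Intra-attentive representation v_a (Eq. 3)
        return torch.einsum("bl,bln->bn", a, w)
```

Setting k = 0 recovers the single-dimensional scoring of Equation (1), while k > 0 corresponds to the multi-dimensional variant of Equation (4).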
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Sarcasm Detection", "Deep Learning for Sarcasm Detection", "Attention Models for NLP", "Our Proposed Approach", "Input Encoding Layer", "Multi-dimensional Intra-Attention", "Long Short-Term Memory Encoder", "Prediction Layer", "Optimization and Learning", "Empirical Evaluation", "Datasets", "Compared Methods", "Implementation Details and Metrics", "Experimental Results", "In-depth Model Analysis", "Conclusion" ] }
GEM-SciDuet-train-32#paper-1046#slide-0
Background
o a form of verbal irony that is intended to express contempt or ridicule (The Free Dictionary) o commonly manifests on social communities (e.g. Twitter, Reddit) Prior work considered sarcasm to be a contrast between a positive and negative sentiment (Riloff et al., 2013) I love to be ignored! Perfect movie for people who can't fall asleep Scope of this work: sarcasm detection based on a document's content and commonsense knowledge but not external knowledge, or a user's profile and context I love to solve math problems every day Cool. It took me 10 hours to fly from Sydney to Melbourne.
o a form of verbal irony that is intended to express contempt or ridicule (The Free Dictionary) o commonly manifests on social communities (e.g. Twitter, Reddit) Prior work considered sarcasm to be a contrast between a positive and negative sentiment (Riloff et al., 2013) I love to be ignored! Perfect movie for people who can't fall asleep Scope of this work: sarcasm detection based on a document's content and commonsense knowledge but not external knowledge, or a user's profile and context I love to solve math problems every day Cool. It took me 10 hours to fly from Sydney to Melbourne.
[]
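The preprocessing and evaluation protocol reported in the implementation details (NLTK's Tweet tokenizer, length caps of 40/20/80 tokens for Twitter/Reddit/Debates, removal of documents containing URLs or fewer than 5 tokens, '@USER' replacement, macro-averaged F1 and accuracy) could be approximated as below. This is a simplified sketch rather than the authors' pipeline; the function names are our own, and vocabulary construction and UNK handling are omitted.

```python
# Simplified preprocessing / evaluation sketch based on the implementation
# details in the paper content; not the authors' exact pipeline.
from nltk.tokenize import TweetTokenizer
from sklearn.metrics import accuracy_score, f1_score

MAX_LEN = {"twitter": 40, "reddit": 20, "debates": 80}
_tok = TweetTokenizer()

def preprocess(text: str, source: str):
    """Tokenize a document and apply the filtering rules described in the paper."""
    if "http" in text:                  # drop documents containing URLs
        return None
    tokens = _tok.tokenize(text)
    tokens = ["@USER" if t.startswith("@") else t for t in tokens]  # user mentions
    if len(tokens) < 5:                 # drop very short documents
        return None
    return tokens[: MAX_LEN[source]]    # truncate to the per-source cap

def evaluate(y_true, y_pred):
    """Macro-averaged F1 and accuracy, the two headline metrics."""
    return {"macro_f1": f1_score(y_true, y_pred, average="macro"),
            "accuracy": accuracy_score(y_true, y_pred)}
```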
GEM-SciDuet-train-32#paper-1046#slide-1
1046
Reasoning with Sarcasm by Reading In-between
Sarcasm is a sophisticated speech act which commonly manifests on social communities such as Twitter and Reddit. The prevalence of sarcasm on the social web is highly disruptive to opinion mining systems due to not only its tendency of polarity flipping but also usage of figurative language. Sarcasm commonly manifests with a contrastive theme either between positive-negative sentiments or between literal-figurative scenarios. In this paper, we revisit the notion of modeling contrast in order to reason with sarcasm. More specifically, we propose an attention-based neural model that looks in-between instead of across, enabling it to explicitly model contrast and incongruity. We conduct extensive experiments on six benchmark datasets from Twitter, Reddit and the Internet Argument Corpus. Our proposed model not only achieves state-of-the-art performance on all datasets but also enjoys improved interpretability.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239 ], "paper_content_text": [ "Introduction Sarcasm, commonly defined as 'An ironical taunt used to express contempt', is a challenging NLP problem due to its highly figurative nature.", "The usage of sarcasm on the social web is prevalent and can be frequently observed in reviews, microblogs (tweets) and online forums.", "As such, the battle against sarcasm is also regularly cited as one of the key challenges in sentiment analysis and opinion mining applications (Pang et al., 2008) .", "Hence, it is both imperative and intuitive that effective sarcasm detectors can bring about numerous benefits to opinion mining applications.", "Sarcasm is often associated to several linguistic phenomena such as (1) an explicit contrast between sentiments or (2) disparity between the conveyed emotion and the author's situation (context).", "Prior work has considered sarcasm to be a contrast between a positive and negative sentiment (Riloff et al., 2013) .", "Consider the following examples: 1.", "I absolutely love to be ignored!", "2.", "Yay!!!", "The best thing to wake up to is my neighbor's drilling.", "3.", "Perfect movie for people who can't fall asleep.", "Given the examples, we make a crucial observation -Sarcasm relies a lot on the semantic relationships (and contrast) between individual words and phrases in a sentence.", "For instance, the relationships between phrases {love, ignored}, {best, drilling} and {movie, asleep} (in the examples above) richly characterize the nature of sarcasm conveyed, i.e., word pairs tend to be contradictory and more often than not, express a juxtaposition of positive and negative terms.", "This concept is also explored in (Joshi et al., 2015) in which the authors refer to this phenomena as 'incongruity'.", "Hence, it would be useful to capture the relationships between selected word pairs in a sentence, i.e., looking in-between.", "State-of-the-art sarcasm detection systems mainly rely on deep and sequential neural networks (Ghosh and Veale, 2016; Zhang et al., 2016) .", "In these works, compositional encoders such as gated recurrent units (GRU) or long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) are often employed, with the input document being parsed one word at a time.", "This has several shortcomings for the sarcasm detection task.", "Firstly, there is no explicit interaction between word pairs, which hampers its ability to explicitly model contrast, 
incongruity or juxtaposition of situations.", "Secondly, it is difficult to capture long-range dependencies.", "In this case, contrastive situations (or sentiments) which are commonplace in sarcastic language may be hard to detect with simple sequential models.", "To overcome the weaknesses of standard sequential models such as recurrent neural networks, our work is based on the intuition that modeling intra-sentence relationships can not only improve classification performance but also pave the way for more explainable neural sarcasm detection methods.", "In other words, our key intuition manifests itself in the form of an attention-based neural network.", "While the key idea of most neural attention mechanisms is to focus on relevant words and sub-phrases, it merely looks across and does not explicitly capture word-word relationships.", "Hence, it suffers from the same shortcomings as sequential models.", "In this paper, our aim is to combine the effectiveness of state-of-the-art recurrent models while harnessing the intuition of looking in-between.", "We propose a multi-dimensional intra-attention recurrent network that models intricate similarities between each word pair in the sentence.", "In other words, our novel deep learning model aims to capture 'contrast' (Riloff et al., 2013) and 'incongruity' (Joshi et al., 2015) within end-to-end neural networks.", "Our model can be thought of selftargeted co-attention (Xiong et al., 2016) , which allows our model to not only capture word-word relationships but also long-range dependencies.", "Finally, we show that our model produces interpretable attention maps which aid in the explainability of model outputs.", "To the best of our knowledge, our model is the first attention model that can produce explainable results in the sarcasm detection task.", "Briefly, the prime contributions of this work can be summarized as follows: • We propose a new state-of-the-art method for sarcasm detection.", "Our proposed model, the Multi-dimensional Intra-Attention Recurrent Network (MIARN) is strongly based on the intuition of compositional learning by leveraging intra-sentence relationships.", "To the best of our knowledge, none of the existing state-of-the-art models considered exploiting intra-sentence relationships, solely relying on sequential composition.", "• We conduct extensive experiments on multiple benchmarks from Twitter, Reddit and the Internet Argument Corpus.", "Our proposed MIARN achieves highly competitive performance on all benchmarks, outperforming existing state-of-the-art models such as GRNN (Zhang et al., 2016) and CNN-LSTM-DNN (Ghosh and Veale, 2016) .", "Related Work Sarcasm is a complex linguistic phenomena that have long fascinated both linguists and NLP researchers.", "After all, a better computational understanding of this complicated speech act could potentially bring about numerous benefits to existing opinion mining applications.", "Across the rich history of research on sarcasm, several theories such as the Situational Disparity Theory (Wilson, 2006 ) and the Negation Theory (Giora, 1995) have emerged.", "In these theories, a common theme is a motif that is strongly grounded in contrast, whether in sentiment, intention, situation or context.", "(Riloff et al., 2013) propagates this premise forward, presenting an algorithm strongly based on the intuition that sarcasm arises from a juxtaposition of positive and negative situations.", "Sarcasm Detection Naturally, many works in this area have treated the sarcasm detection task 
as a standard text classification problem.", "An extremely comprehensive overview can be found at (Joshi et al., 2017) .", "Feature engineering approaches were highly popular, exploiting a wide diverse range of features such as syntactic patterns (Tsur et al., 2010) , sentiment lexicons (González-Ibánez et al., 2011), ngram (Reyes et al., 2013) , word frequency (Barbieri et al., 2014), word shape and pointedness features (Ptáček et al., 2014) , readability and flips (Rajadesingan et al., 2015) , etc.", "Notably, there have been quite a reasonable number of works that propose features based on similarity and contrast.", "(Hernández-Farías et al., 2015) measured the Wordnet based semantic similarity between words.", "(Joshi et al., 2015) proposed a framework based on explicit and implicit incongruity, utilizing features based on positive-negative patterns.", "(Joshi et al., 2016) proposed similarity features based on word embeddings.", "Deep Learning for Sarcasm Detection Deep learning based methods have recently garnered considerable interest in many areas of NLP research.", "In our problem domain, (Zhang et al., 2016) proposed a recurrent-based model with a gated pooling mechanism for sarcasm detection on Twitter.", "(Ghosh and Veale, 2016) proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that achieves state-of-the-art performance.", "While our work focuses on document-only sarcasm detection, several notable works have proposed models that exploit personality information (Ghosh and Veale, 2017) and user context (Amir et al., 2016) .", "Novel methods for sarcasm detection such as gaze / cognitive features (Mishra et al., 2016 (Mishra et al., , 2017 have also been explored.", "(Peled and Reichart, 2017) proposed a novel framework based on neural machine translation to convert a sequence from sarcastic to non-sarcastic.", "(Felbo et al., 2017) proposed a layer-wise training scheme that utilizes emoji-based distant supervision for sentiment analysis and sarcasm detection tasks.", "Attention Models for NLP In the context of NLP, the key idea of neural attention is to soft select a sequence of words based on their relative importance to the task at hand.", "Early innovations in attentional paradigms mainly involve neural machine translation (Luong et al., 2015; for aligning sequence pairs.", "Attention is also commonplace in many NLP applications such as sentiment classification (Chen et al., 2016; Yang et al., 2016) , aspect-level sentiment analysis (Tay et al., 2018s, 2017b Chen et al., 2017) and entailment classification (Rocktäschel et al., 2015) .", "Co-attention / Bi-Attention (Xiong et al., 2016; Seo et al., 2016) is a form of pairwise attention mechanism that was proposed to model query-document pairs.", "Intraattention can be interpreted as a self-targetted coattention and is seeing a lot promising results in many recent works (Vaswani et al., 2017; Parikh et al., 2016; Tay et al., 2017a; Shen et al., 2017) .", "The key idea is to model a sequence against itself, learning to attend while capturing long term dependencies and word-word level interactions.", "To the best of our knowledge, our work is not only the first work that only applies intra-attention to sarcasm detection but also the first attention model for sarcasm detection.", "Our Proposed Approach In this section, we describe our proposed model.", "Figure 1 illustrates our overall model architecture.", "Input Encoding Layer Our model accepts a sequence of one-hot encoded vectors as an input.", "Each 
one-hot encoded vector corresponds to a single word in the vocabulary.", "In the input encoding layer, each one-hot vector is converted into a low-dimensional vector representation (word embedding).", "The word embeddings are parameterized by an embedding layer W ∈ R n×|V | .", "As such, the output of this layer is a sequence of word embeddings, i.e., {w 1 , w 2 , · · · w } where is a predefined maximum sequence length.", "Multi-dimensional Intra-Attention In this section, we describe our multi-dimensional intra-attention mechanism for sarcasm detection.", "We first begin by describing the standard single-dimensional intra-attention.", "The multidimensional adaptation will be introduced later in this section.", "The key idea behind this layer is to look in-between, i.e., modeling the semantics between each word in the input sequence.", "We first begin by modeling the relationship of each word pair in the input sequence.", "A simple way to achieve this is to use a linear 1 transformation layer to project the concatenation of each word embedding pair into a scalar score as follows: s ij = W a ([w i ; w j ]) + b a (1) where W a ∈ R 2n×1 , b a ∈ R are the parameters of this layer.", "[.", "; .]", "is the vector concatenation operator and s ij is a scalar representing the affinity score between word pairs (w i , w j ).", "We can easily observe that s is a symmetrical matrix of × dimensions.", "In order to learn attention vector a, we apply a row-wise max-pooling operator on matrix s. a = sof tmax(max row s) (2) where a ∈ R is a vector representing the learned intra-attention weights.", "Then, the vector a is employed to learn weighted representation of {w 1 , w 2 · · · w } as follows: v a = i=1 w i a i (3) where v ∈ R n is the intra-attentive representation of the input sequence.", "While other choices of pooling operators may be also employed (e.g., mean-pooling over max-pooling), the choice of max-pooling is empirically motivated.", "Intuitively, this attention layer learns to pay attention based on a word's largest contribution to all words in the sequence.", "Since our objective is to highlight words that might contribute to the contrastive theories of sarcasm, a more discriminative pooling operator is desirable.", "Notably, we also mask values of s where i = j such that we do not allow the relationship scores of a word with respect to itself to influence the overall attention weights.", "Furthermore, our network can be considered as an 'inner' adaptation of neural attention, modeling intra-sentence relationships between the raw word representations instead of representations that have been compositionally manipulated.", "This allows word-to-word similarity to be modeled 'as it is' and not be influenced by composition.", "For example, when using the outputs of a compositional encoder (e.g., LSTM), matching words n and n + 1 might not be meaningful since they would be relatively similar in terms of semantic composition.", "For relatively short documents (such as tweets), it is also intuitive that attention typically focuses on the last hidden representation.", "Intuitively, the relationships between two words is often not straightforward.", "Words are complex and often hold more than one meanings (or word senses).", "As such, it might be beneficial to model multiple views between two words.", "This can be modeled by representing the word pair interaction with a vector instead of a scalar.", "As such, we propose a multi-dimensional adaptation of the intra-attention mechanism.", "The key idea 
here is that each word pair is projected down to a lowdimensional vector before we compute the affinity score, which allows it to not only capture one view (one scalar) but also multiple views.", "A modification to Equation (1) constitutes our Multi-Dimensional Intra-Attention variant.", "s ij = W p (ReLU (W q ([w i ; w j ]) + b q )) + b p (4) where W q ∈ R n×k , W p ∈ R k×1 , b q ∈ R k , b p ∈ R are the parameters of this layer.", "The final intraattentive representation is then learned with Equation (2) and Equation (3) which we do not repeat here for the sake of brevity.", "Long Short-Term Memory Encoder While we are able to simply use the learned representation v for prediction, it is clear that v does not encode compositional information and may miss out on important compositional phrases such as 'not happy'.", "Clearly, our intra-attention mechanism simply considers a word-by-word interaction and does not model the input document sequentially.", "As such, it is beneficial to use a separate compositional encoder for this purpose, i.e., learning compositional representations.", "To this end, we employ the standard Long Short-Term Memory (LSTM) encoder.", "The output of an LSTM encoder at each time-step can be briefly defined as: h i = LSTM(w, i), ∀i ∈ [1, .", ".", ". ]", "(5) where represents the maximum length of the sequence and h i ∈ R d is the hidden output of the LSTM encoder at time-step i. d is the size of the hidden units of the LSTM encoder.", "LSTM encoders are parameterized by gating mechanisms learned via nonlinear transformations.", "Since LSTMs are commonplace in standard NLP applications, we omit the technical details for the sake of brevity.", "Finally, to obtain a compositional representation of the input document, we use v c = h which is the last hidden output of the LSTM encoder.", "Note that the inputs to the LSTM encoder are the word embeddings right after the input encoding layer and not the output of the intraattention layer.", "We found that applying an LSTM on the intra-attentively scaled representations do not yield any benefits.", "Prediction Layer The inputs to the final prediction layer are two representations, namely (1) the intra-attentive representation (v a ∈ R n ) and (2) the compositional representation (v c ∈ R d ).", "This layer learns a joint representation of these two views using a nonlinear projection layer.", "v = ReLU (W z ([v a ; v c ]) + b z ) (6) where W z ∈ R (d+n)×d and b z ∈ R d .", "Finally, we pass v into a Softmax classification layer.", "y = Sof tmax(W f v + b f ) (7) where W f ∈ R d×2 , b f ∈ R 2 are the parameters of this layer.ŷ ∈ R 2 is the output layer of our proposed model.", "Optimization and Learning Our network is trained end-to-end, optimizing the standard binary cross-entropy loss function.", "J = − N i=1 [yi logŷi + (1 − yi) log(1 −ŷi)] + R (8) where J is the cost function,ŷ is the output of the network, R = ||θ|| L2 is the L2 regularization and λ is the weight of the regularizer.", "Empirical Evaluation In this section, we describe our experimental setup and results.", "Our experiments were designed to answer the following research questions (RQs).", "• RQ1 -Does our proposed approach outperform existing state-of-the-art models?", "• RQ2 -What are the impacts of some of the architectural choices of our model?", "How much does intra-attention contribute to the model performance?", "Is the Multi-Dimensional adaptation better than the Single-Dimensional adaptation?", "• RQ3 -What can we interpret from the intraattention 
layers?", "Does this align with our hypothesis about looking in-between and modeling contrast?", "Datasets We conduct our experiments on six publicly available benchmark datasets which span across three well-known sources.", "• Tweets -Twitter 2 is a microblogging platform which allows users to post statuses of less than 140 characters.", "We use two collections for sarcasm detection on tweets.", "More specifically, we use the dataset obtained from (1) (Ptáček et al., 2014) in which tweets are trained via hashtag based semisupervised learning, i.e., hashtags such as #not, #sarcasm and #irony are marked as sarcastic tweets and (2) (Riloff et al., 2013) in which Tweets are hand annotated and manually checked for sarcasm.", "For both datasets, we retrieve.", "Tweets using the Twitter API using the provided tweet IDs.", "• Reddit -Reddit 3 is a highly popular social forum and community.", "Similar to Tweets, sarcastic posts are obtained via the tag '/s' which are marked by the authors themselves.", "We use two Reddit datasets which are obtained from the subreddits /r/movies and /r/technology respectively.", "Datasets are subsets from (Khodak et al., 2017) .", "• Debates -We use two datasets 4 from the Internet Argument Corpus (IAC) (Lukin and Walker, 2017) which have been hand annotated for sarcasm.", "This dataset, unlike the first two, is mainly concerned with long text and provides a diverse comparison from the other datasets.", "The IAC corpus was designed for research on political debates on online forums.", "We use the V1 and V2 versions of the sarcasm corpus which are denoted as IAC-V1 and IAC-V2 respectively.", "The statistics of the datasets used in our experiments is reported in Table 1 .", "Compared Methods We compare our proposed model with the following algorithms.", "• NBOW is a simple neural bag-of-words baseline that sums all the word embeddings and passes the summed vector into a simple logistic regression layer.", "• CNN is a vanilla Convolutional Neural Network with max-pooling.", "CNNs are considered as compositional encoders that capture n-gram features by parameterized sliding windows.", "The filter width is 3 and number of filters f = 100.", "• LSTM is a vanilla Long Short-Term Memory Network.", "The size of the LSTM cell is set to d = 100.", "• ATT-LSTM (Attention-based LSTM) is a LSTM model with a neural attention mechanism applied to all the LSTM hidden outputs.", "We use a similar adaptation to (Yang et al., 2016) , albeit only at the document-level.", "• GRNN (Gated Recurrent Neural Network) is a Bidirectional Gated Recurrent Unit (GRU) model that was proposed for sarcasm detection by (Zhang et al., 2016) .", "GRNN uses a gated pooling mechanism to aggregate the hidden representations from a standard BiGRU model.", "Since we only compare on document-level sarcasm detection, we do not use the variant of GRNN that exploits user context.", "• CNN-LSTM-DNN (Convolutional LSTM + Deep Neural Network), proposed by (Ghosh and Veale, 2016) , is the state-of-theart model for sarcasm detection.", "This model is a combination of a CNN, LSTM and Deep Neural Network via stacking.", "It stacks two layers of 1D convolution with 2 LSTM layers.", "The output passes through a deep neural network (DNN) for prediction.", "Both CNN-LSTM-DNN (Ghosh and Veale, 2016) and GRNN (Zhang et al., 2016) are state-ofthe-art models for document-level sarcasm detection and have outperformed numerous neural and non-neural baselines.", "In particular, both works have well surpassed feature-based models 
(Support Vector Machines, etc.", "), as such we omit comparisons for the sake of brevity and focus comparisons with recent neural models instead.", "Moreover, since our work focuses only on document-level sarcasm detection, we do not compare against models that use external information such as user profiles, context, personality information (Ghosh and Veale, 2017) or emoji-based distant supervision (Felbo et al., 2017) .", "For our model, we report results on both multi-dimensional and single-dimensional intraattention.", "The two models are named as MIARN and SIARN respectively.", "Implementation Details and Metrics We adopt standard the evaluation metrics for the sarcasm detection task, i.e., macro-averaged F1 and accuracy score.", "Additionally, we also report precision and recall scores.", "All deep learning models are implemented using Tensor-Flow (Abadi et al., 2015) and optimized on a NVIDIA GTX1070 GPU.", "Text is preprocessed with NLTK 5 's Tweet tokenizer.", "Words that only appear once in the entire corpus are removed and marked with the UNK token.", "Document lengths are truncated at 40, 20, 80 tokens for Twitter, Reddit and Debates dataset respectively.", "Mentions of other users on the Twitter dataset are replaced by '@USER'.", "Documents with URLs (i.e., containing 'http') are removed from the corpus.", "Documents with less than 5 tokens are also removed.", "The learning optimizer used is the RMSProp with an initial learning rate of 0.001.", "The L2 regularization is set to 10 −8 .", "We initialize the word embedding layer with GloVe (Pennington et al., 2014) .", "We use the GloVe model trained on 2B Tweets for the Tweets and Reddit dataset.", "The Glove model trained on Common Crawl is used for the Debates corpus.", "The size of the word embeddings is fixed at d = 100 and are fine-tuned during training.", "In all experiments, we use a development set to select the best hyperparameters.", "Each model is trained for a total of 30 epochs and the model is saved each time the performance Tweets (Ptáček et al., 2014) Tweets (Riloff et al., 2013 on the development set is topped.", "The batch size is tuned amongst {128, 256, 512} for all datasets.", "The only exception is the Tweets dataset from (Riloff et al., 2013) , in which a batch size of 16 is used in lieu of the much smaller dataset size.", "For fair comparison, all models have the same hidden representation size and are set to 100 for both recurrent and convolutional based models (i.e., number of filters).", "For MIARN, the size of intraattention hidden representation is tuned amongst {4, 8, 10, 20}.", "Experimental Results Table 2, Table 3 and Table 4 reports a performance comparison of all benchmarked models on the Tweets, Reddit and Debates datasets respectively.", "We observe that our proposed SIARN and MIARN models achieve the best results across all six datasets.", "The relative improvement differs across domain and datasets.", "On the Tweets dataset from (Ptáček et al., 2014) , MIARN achieves about ≈ 2% − 2.2% improvement in terms of F1 and accuracy score when compared against the best baseline.", "On the other Tweets dataset from (Riloff et al., 2013) , the performance gain of our proposed model is larger, i.e., 3% − 5% improvement on average over most baselines.", "Our proposed SIARN and MIARN models achieve very competitive performance on the Reddit datasets, with an average of ≈ 2% margin improvement over the best baselines.", "Notably, the baselines we compare against are extremely competitive state-of-the-art 
neural network models.", "This further reinforces the effectiveness of our proposed approach.", "Additionally, the performance improvement on Debates (long text) is significantly larger than short text (i.e., Twitter and Reddit).", "For example, MI-ARN outperforms GRNN and CNN-LSTM-DNN by ≈ 8% − 10% on both IAC-V1 and IAC-V2.", "At this note, we can safely put RQ1 to rest.", "Overall, the performance of MIARN is often marginally better than SIARN (with some exceptions, e.g., Tweets dataset from (Riloff et al., 2013) ).", "We believe that this is attributed to the fact that more complex word-word relationships can be learned by using multi-dimensional values instead of single-dimensional scalars.", "The performance brought by our additional intra-attentive representations can be further observed by comparing against the vanilla LSTM model.", "Clearly, removing the intra-attention network reverts our model to the standard LSTM.", "The performance improvements are encouraging, leading to almost 10% improvement in terms of F1 and accuracy.", "On datasets with short text, the performance improvement is often a modest ≈ 2% − 3% (RQ2).", "Notably, our proposed models also perform much better on long text, which can be attributed to the intra-attentive representations explicitly modeling long range dependencies.", "Intuitively, this is problematic for models that only capture sequential dependencies (e.g., word by word).", "Finally, the relative performance of competitor methods are as expected.", "NBOW performs the worse, since it is just a naive bag-of-words model without any compositional or sequential information.", "On short text, LSTMs are overall better than CNNs.", "However, this trend is reversed on long text (i.e., Debates) since the LSTM model may be overburdened by overly long sequences.", "On short text, we also found that attention (or the gated pooling mechanism from GRNN) did not really help make any significant improvements over the vanilla LSTM model and a qualitative explanation to why this is so is deferred to the next section.", "However, attention helps for long text (such as debates), resulting in Attention LSTMs becoming the strongest baseline on the Debates datasets.", "However, our proposed intra-attentive model is both effective on short text and long text, outperforming Attention LSTMs consistently on all datasets.", "In-depth Model Analysis In this section, we present an in-depth analysis of our proposed model.", "More specifically, we not only aim to showcase the interpretability of our model but also explain how representations are formed.", "More specifically, we test our model (trained on Tweets dataset by (Ptáček et al., 2014) ) on two examples.", "We extract the attention maps of three models, namely MIARN, Attention LSTM (ATT-LSTM) and applying Attention mechanism directly on the word embeddings without using a LSTM encoder (ATT-RAW).", "Table 5 shows the visualization of the attention maps.", "In the first example (true label), we notice that the attention maps of MIARN are focusing on the words 'love' and 'ignored'.", "This is in concert with our intuition about modeling contrast and incongruity.", "On the other hand, both ATT-LSTM and ATT-RAW learn very different attention maps.", "As for ATT-LSTM, the attention weight is focused completely on the last representation -the token '!!'.", "Additionally, we also observed that this is true for many examples in the Tweets and Reddit dataset.", "We believe that this is the reason why standard neural attention does 
not help as what the attention mechanism is learning is to select the last representation (i.e., vanilla LSTM).", "Without the LSTM encoder, the attention weights focus on 'love' but not 'ignored'.", "This fails to capture any concept of contrast or incongruity.", "Next, we consider the false labeled example.", "This time, the attention maps of MIARN are not as distinct as before.", "However, they focus on sentiment-bearing words, composing the words 'ignored sucks' to form the majority of the intraattentive representation.", "This time, passing the vector made up of 'ignored sucks' allows the subsequent layers to recognize that there is no contrasting situation or sentiment.", "Similarly, ATT-LSTM focuses on the last word time which is totally non-interpretable.", "On the other hand, ATT-RAW focuses on relatively non-meaningful words such as 'big'.", "Overall, we analyzed two cases (positive and negative labels) and found that MIARN produces very explainable attention maps.", "In general, we found that MIARN is able to identify contrast and incongruity in sentences, allowing our model to better detect sarcasm.", "This is facilitated by modeling intra-sentence relationships.", "Notably, the standard vanilla attention is not explainable or interpretable.", "Conclusion Based on the intuition of intra-sentence similarity (i.e., looking in-between), we proposed a new neural network architecture for sarcasm detection.", "Our network incorporates a multi-dimensional intra-attention component that learns an intraattentive representation of the sentence, enabling it to detect contrastive sentiment, situations and incongruity.", "Extensive experiments over six public benchmarks confirm the empirical effectiveness of our proposed model.", "Our proposed MI-ARN model outperforms strong state-of-the-art baselines such as GRNN and CNN-LSTM-DNN.", "Analysis of the intra-attention scores shows that our model learns highly interpretable attention weights, paving the way for more explainable neural sarcasm detection methods." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Sarcasm Detection", "Deep Learning for Sarcasm Detection", "Attention Models for NLP", "Our Proposed Approach", "Input Encoding Layer", "Multi-dimensional Intra-Attention", "Long Short-Term Memory Encoder", "Prediction Layer", "Optimization and Learning", "Empirical Evaluation", "Datasets", "Compared Methods", "Implementation Details and Metrics", "Experimental Results", "In-depth Model Analysis", "Conclusion" ] }
GEM-SciDuet-train-32#paper-1046#slide-1
Motivation
State-of-the-art sarcasm detection systems mainly rely on deep and sequential neural networks (Ghosh and Veale, 2016; Zhang et al., 2016) o compositional encoders (GRU, LSTM) are often employed, with the input document being parsed one word at a time o no explicit interaction between word pairs hampers the ability to explicitly model contrast, incongruity or juxtaposition of situations o difficult to capture long-range dependencies
State-of-the-art sarcasm detection systems mainly rely on deep and sequential neural networks (Ghosh and Veale, 2016; Zhang et al., 2016) o compositional encoders (GRU, LSTM) are often employed, with the input document being parsed one word at a time o no explicit interaction between word pairs hampers the ability to explicitly model contrast, incongruity or juxtaposition of situations o difficult to capture long-range dependencies
[]
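The "Motivation" record above argues for looking in-between word pairs rather than reading strictly left to right. As a companion to that record, here is a minimal numpy sketch of the single-dimensional intra-attention described in the paper content (its Eq. 1-3): score every word pair, mask the diagonal, max-pool each row, and softmax the pooled scores into attention weights. The embeddings and the projection W_a are random placeholders, so the printed weights are illustrative only and do not reflect the authors' trained model; the example sentence is the paper's own "I absolutely love to be ignored".

```python
import numpy as np

def intra_attention(embeddings, W_a, b_a=0.0):
    """Single-dimensional intra-attention sketch (Eq. 1-3 of the paper content).

    embeddings: (L, n) word embeddings of one sentence.
    Returns the attention weights a (L,) and the intra-attentive vector v_a (n,).
    """
    L, n = embeddings.shape
    s = np.full((L, L), -np.inf)          # pairwise affinity matrix, diagonal masked
    for i in range(L):
        for j in range(L):
            if i != j:
                pair = np.concatenate([embeddings[i], embeddings[j]])
                s[i, j] = pair @ W_a + b_a

    row_max = s.max(axis=1)               # row-wise max pooling
    a = np.exp(row_max - row_max.max())
    a /= a.sum()                          # softmax over the pooled scores
    v_a = a @ embeddings                  # attention-weighted sum of embeddings
    return a, v_a

rng = np.random.default_rng(0)
tokens = "i absolutely love to be ignored".split()
emb = rng.standard_normal((len(tokens), 8))   # toy 8-dimensional embeddings
W_a = rng.standard_normal(2 * 8) * 0.1        # untrained, placeholder projection
weights, v_a = intra_attention(emb, W_a)
print(dict(zip(tokens, weights.round(3))))
```

With trained weights, the row-wise max pooling is what lets a word such as "ignored" receive high attention because of its strong affinity with a distant word such as "love", which is the contrast signal the analysis in the record above points to.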
GEM-SciDuet-train-32#paper-1046#slide-2
1046
Reasoning with Sarcasm by Reading In-between
Sarcasm is a sophisticated speech act which commonly manifests on social communities such as Twitter and Reddit. The prevalence of sarcasm on the social web is highly disruptive to opinion mining systems due to not only its tendency of polarity flipping but also usage of figurative language. Sarcasm commonly manifests with a contrastive theme either between positive-negative sentiments or between literal-figurative scenarios. In this paper, we revisit the notion of modeling contrast in order to reason with sarcasm. More specifically, we propose an attention-based neural model that looks in-between instead of across, enabling it to explicitly model contrast and incongruity. We conduct extensive experiments on six benchmark datasets from Twitter, Reddit and the Internet Argument Corpus. Our proposed model not only achieves state-of-the-art performance on all datasets but also enjoys improved interpretability.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239 ], "paper_content_text": [ "Introduction Sarcasm, commonly defined as 'An ironical taunt used to express contempt', is a challenging NLP problem due to its highly figurative nature.", "The usage of sarcasm on the social web is prevalent and can be frequently observed in reviews, microblogs (tweets) and online forums.", "As such, the battle against sarcasm is also regularly cited as one of the key challenges in sentiment analysis and opinion mining applications (Pang et al., 2008) .", "Hence, it is both imperative and intuitive that effective sarcasm detectors can bring about numerous benefits to opinion mining applications.", "Sarcasm is often associated to several linguistic phenomena such as (1) an explicit contrast between sentiments or (2) disparity between the conveyed emotion and the author's situation (context).", "Prior work has considered sarcasm to be a contrast between a positive and negative sentiment (Riloff et al., 2013) .", "Consider the following examples: 1.", "I absolutely love to be ignored!", "2.", "Yay!!!", "The best thing to wake up to is my neighbor's drilling.", "3.", "Perfect movie for people who can't fall asleep.", "Given the examples, we make a crucial observation -Sarcasm relies a lot on the semantic relationships (and contrast) between individual words and phrases in a sentence.", "For instance, the relationships between phrases {love, ignored}, {best, drilling} and {movie, asleep} (in the examples above) richly characterize the nature of sarcasm conveyed, i.e., word pairs tend to be contradictory and more often than not, express a juxtaposition of positive and negative terms.", "This concept is also explored in (Joshi et al., 2015) in which the authors refer to this phenomena as 'incongruity'.", "Hence, it would be useful to capture the relationships between selected word pairs in a sentence, i.e., looking in-between.", "State-of-the-art sarcasm detection systems mainly rely on deep and sequential neural networks (Ghosh and Veale, 2016; Zhang et al., 2016) .", "In these works, compositional encoders such as gated recurrent units (GRU) or long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) are often employed, with the input document being parsed one word at a time.", "This has several shortcomings for the sarcasm detection task.", "Firstly, there is no explicit interaction between word pairs, which hampers its ability to explicitly model contrast, 
incongruity or juxtaposition of situations.", "Secondly, it is difficult to capture long-range dependencies.", "In this case, contrastive situations (or sentiments) which are commonplace in sarcastic language may be hard to detect with simple sequential models.", "To overcome the weaknesses of standard sequential models such as recurrent neural networks, our work is based on the intuition that modeling intra-sentence relationships can not only improve classification performance but also pave the way for more explainable neural sarcasm detection methods.", "In other words, our key intuition manifests itself in the form of an attention-based neural network.", "While the key idea of most neural attention mechanisms is to focus on relevant words and sub-phrases, it merely looks across and does not explicitly capture word-word relationships.", "Hence, it suffers from the same shortcomings as sequential models.", "In this paper, our aim is to combine the effectiveness of state-of-the-art recurrent models while harnessing the intuition of looking in-between.", "We propose a multi-dimensional intra-attention recurrent network that models intricate similarities between each word pair in the sentence.", "In other words, our novel deep learning model aims to capture 'contrast' (Riloff et al., 2013) and 'incongruity' (Joshi et al., 2015) within end-to-end neural networks.", "Our model can be thought of selftargeted co-attention (Xiong et al., 2016) , which allows our model to not only capture word-word relationships but also long-range dependencies.", "Finally, we show that our model produces interpretable attention maps which aid in the explainability of model outputs.", "To the best of our knowledge, our model is the first attention model that can produce explainable results in the sarcasm detection task.", "Briefly, the prime contributions of this work can be summarized as follows: • We propose a new state-of-the-art method for sarcasm detection.", "Our proposed model, the Multi-dimensional Intra-Attention Recurrent Network (MIARN) is strongly based on the intuition of compositional learning by leveraging intra-sentence relationships.", "To the best of our knowledge, none of the existing state-of-the-art models considered exploiting intra-sentence relationships, solely relying on sequential composition.", "• We conduct extensive experiments on multiple benchmarks from Twitter, Reddit and the Internet Argument Corpus.", "Our proposed MIARN achieves highly competitive performance on all benchmarks, outperforming existing state-of-the-art models such as GRNN (Zhang et al., 2016) and CNN-LSTM-DNN (Ghosh and Veale, 2016) .", "Related Work Sarcasm is a complex linguistic phenomena that have long fascinated both linguists and NLP researchers.", "After all, a better computational understanding of this complicated speech act could potentially bring about numerous benefits to existing opinion mining applications.", "Across the rich history of research on sarcasm, several theories such as the Situational Disparity Theory (Wilson, 2006 ) and the Negation Theory (Giora, 1995) have emerged.", "In these theories, a common theme is a motif that is strongly grounded in contrast, whether in sentiment, intention, situation or context.", "(Riloff et al., 2013) propagates this premise forward, presenting an algorithm strongly based on the intuition that sarcasm arises from a juxtaposition of positive and negative situations.", "Sarcasm Detection Naturally, many works in this area have treated the sarcasm detection task 
as a standard text classification problem.", "An extremely comprehensive overview can be found at (Joshi et al., 2017) .", "Feature engineering approaches were highly popular, exploiting a wide diverse range of features such as syntactic patterns (Tsur et al., 2010) , sentiment lexicons (González-Ibánez et al., 2011), ngram (Reyes et al., 2013) , word frequency (Barbieri et al., 2014), word shape and pointedness features (Ptáček et al., 2014) , readability and flips (Rajadesingan et al., 2015) , etc.", "Notably, there have been quite a reasonable number of works that propose features based on similarity and contrast.", "(Hernández-Farías et al., 2015) measured the Wordnet based semantic similarity between words.", "(Joshi et al., 2015) proposed a framework based on explicit and implicit incongruity, utilizing features based on positive-negative patterns.", "(Joshi et al., 2016) proposed similarity features based on word embeddings.", "Deep Learning for Sarcasm Detection Deep learning based methods have recently garnered considerable interest in many areas of NLP research.", "In our problem domain, (Zhang et al., 2016) proposed a recurrent-based model with a gated pooling mechanism for sarcasm detection on Twitter.", "(Ghosh and Veale, 2016) proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that achieves state-of-the-art performance.", "While our work focuses on document-only sarcasm detection, several notable works have proposed models that exploit personality information (Ghosh and Veale, 2017) and user context (Amir et al., 2016) .", "Novel methods for sarcasm detection such as gaze / cognitive features (Mishra et al., 2016 (Mishra et al., , 2017 have also been explored.", "(Peled and Reichart, 2017) proposed a novel framework based on neural machine translation to convert a sequence from sarcastic to non-sarcastic.", "(Felbo et al., 2017) proposed a layer-wise training scheme that utilizes emoji-based distant supervision for sentiment analysis and sarcasm detection tasks.", "Attention Models for NLP In the context of NLP, the key idea of neural attention is to soft select a sequence of words based on their relative importance to the task at hand.", "Early innovations in attentional paradigms mainly involve neural machine translation (Luong et al., 2015; for aligning sequence pairs.", "Attention is also commonplace in many NLP applications such as sentiment classification (Chen et al., 2016; Yang et al., 2016) , aspect-level sentiment analysis (Tay et al., 2018s, 2017b Chen et al., 2017) and entailment classification (Rocktäschel et al., 2015) .", "Co-attention / Bi-Attention (Xiong et al., 2016; Seo et al., 2016) is a form of pairwise attention mechanism that was proposed to model query-document pairs.", "Intraattention can be interpreted as a self-targetted coattention and is seeing a lot promising results in many recent works (Vaswani et al., 2017; Parikh et al., 2016; Tay et al., 2017a; Shen et al., 2017) .", "The key idea is to model a sequence against itself, learning to attend while capturing long term dependencies and word-word level interactions.", "To the best of our knowledge, our work is not only the first work that only applies intra-attention to sarcasm detection but also the first attention model for sarcasm detection.", "Our Proposed Approach In this section, we describe our proposed model.", "Figure 1 illustrates our overall model architecture.", "Input Encoding Layer Our model accepts a sequence of one-hot encoded vectors as an input.", "Each 
one-hot encoded vector corresponds to a single word in the vocabulary.", "In the input encoding layer, each one-hot vector is converted into a low-dimensional vector representation (word embedding).", "The word embeddings are parameterized by an embedding layer W ∈ R n×|V | .", "As such, the output of this layer is a sequence of word embeddings, i.e., {w 1 , w 2 , · · · w } where is a predefined maximum sequence length.", "Multi-dimensional Intra-Attention In this section, we describe our multi-dimensional intra-attention mechanism for sarcasm detection.", "We first begin by describing the standard single-dimensional intra-attention.", "The multidimensional adaptation will be introduced later in this section.", "The key idea behind this layer is to look in-between, i.e., modeling the semantics between each word in the input sequence.", "We first begin by modeling the relationship of each word pair in the input sequence.", "A simple way to achieve this is to use a linear 1 transformation layer to project the concatenation of each word embedding pair into a scalar score as follows: s ij = W a ([w i ; w j ]) + b a (1) where W a ∈ R 2n×1 , b a ∈ R are the parameters of this layer.", "[.", "; .]", "is the vector concatenation operator and s ij is a scalar representing the affinity score between word pairs (w i , w j ).", "We can easily observe that s is a symmetrical matrix of × dimensions.", "In order to learn attention vector a, we apply a row-wise max-pooling operator on matrix s. a = sof tmax(max row s) (2) where a ∈ R is a vector representing the learned intra-attention weights.", "Then, the vector a is employed to learn weighted representation of {w 1 , w 2 · · · w } as follows: v a = i=1 w i a i (3) where v ∈ R n is the intra-attentive representation of the input sequence.", "While other choices of pooling operators may be also employed (e.g., mean-pooling over max-pooling), the choice of max-pooling is empirically motivated.", "Intuitively, this attention layer learns to pay attention based on a word's largest contribution to all words in the sequence.", "Since our objective is to highlight words that might contribute to the contrastive theories of sarcasm, a more discriminative pooling operator is desirable.", "Notably, we also mask values of s where i = j such that we do not allow the relationship scores of a word with respect to itself to influence the overall attention weights.", "Furthermore, our network can be considered as an 'inner' adaptation of neural attention, modeling intra-sentence relationships between the raw word representations instead of representations that have been compositionally manipulated.", "This allows word-to-word similarity to be modeled 'as it is' and not be influenced by composition.", "For example, when using the outputs of a compositional encoder (e.g., LSTM), matching words n and n + 1 might not be meaningful since they would be relatively similar in terms of semantic composition.", "For relatively short documents (such as tweets), it is also intuitive that attention typically focuses on the last hidden representation.", "Intuitively, the relationships between two words is often not straightforward.", "Words are complex and often hold more than one meanings (or word senses).", "As such, it might be beneficial to model multiple views between two words.", "This can be modeled by representing the word pair interaction with a vector instead of a scalar.", "As such, we propose a multi-dimensional adaptation of the intra-attention mechanism.", "The key idea 
here is that each word pair is projected down to a lowdimensional vector before we compute the affinity score, which allows it to not only capture one view (one scalar) but also multiple views.", "A modification to Equation (1) constitutes our Multi-Dimensional Intra-Attention variant.", "s ij = W p (ReLU (W q ([w i ; w j ]) + b q )) + b p (4) where W q ∈ R n×k , W p ∈ R k×1 , b q ∈ R k , b p ∈ R are the parameters of this layer.", "The final intraattentive representation is then learned with Equation (2) and Equation (3) which we do not repeat here for the sake of brevity.", "Long Short-Term Memory Encoder While we are able to simply use the learned representation v for prediction, it is clear that v does not encode compositional information and may miss out on important compositional phrases such as 'not happy'.", "Clearly, our intra-attention mechanism simply considers a word-by-word interaction and does not model the input document sequentially.", "As such, it is beneficial to use a separate compositional encoder for this purpose, i.e., learning compositional representations.", "To this end, we employ the standard Long Short-Term Memory (LSTM) encoder.", "The output of an LSTM encoder at each time-step can be briefly defined as: h i = LSTM(w, i), ∀i ∈ [1, .", ".", ". ]", "(5) where represents the maximum length of the sequence and h i ∈ R d is the hidden output of the LSTM encoder at time-step i. d is the size of the hidden units of the LSTM encoder.", "LSTM encoders are parameterized by gating mechanisms learned via nonlinear transformations.", "Since LSTMs are commonplace in standard NLP applications, we omit the technical details for the sake of brevity.", "Finally, to obtain a compositional representation of the input document, we use v c = h which is the last hidden output of the LSTM encoder.", "Note that the inputs to the LSTM encoder are the word embeddings right after the input encoding layer and not the output of the intraattention layer.", "We found that applying an LSTM on the intra-attentively scaled representations do not yield any benefits.", "Prediction Layer The inputs to the final prediction layer are two representations, namely (1) the intra-attentive representation (v a ∈ R n ) and (2) the compositional representation (v c ∈ R d ).", "This layer learns a joint representation of these two views using a nonlinear projection layer.", "v = ReLU (W z ([v a ; v c ]) + b z ) (6) where W z ∈ R (d+n)×d and b z ∈ R d .", "Finally, we pass v into a Softmax classification layer.", "y = Sof tmax(W f v + b f ) (7) where W f ∈ R d×2 , b f ∈ R 2 are the parameters of this layer.ŷ ∈ R 2 is the output layer of our proposed model.", "Optimization and Learning Our network is trained end-to-end, optimizing the standard binary cross-entropy loss function.", "J = − N i=1 [yi logŷi + (1 − yi) log(1 −ŷi)] + R (8) where J is the cost function,ŷ is the output of the network, R = ||θ|| L2 is the L2 regularization and λ is the weight of the regularizer.", "Empirical Evaluation In this section, we describe our experimental setup and results.", "Our experiments were designed to answer the following research questions (RQs).", "• RQ1 -Does our proposed approach outperform existing state-of-the-art models?", "• RQ2 -What are the impacts of some of the architectural choices of our model?", "How much does intra-attention contribute to the model performance?", "Is the Multi-Dimensional adaptation better than the Single-Dimensional adaptation?", "• RQ3 -What can we interpret from the intraattention 
layers?", "Does this align with our hypothesis about looking in-between and modeling contrast?", "Datasets We conduct our experiments on six publicly available benchmark datasets which span across three well-known sources.", "• Tweets -Twitter 2 is a microblogging platform which allows users to post statuses of less than 140 characters.", "We use two collections for sarcasm detection on tweets.", "More specifically, we use the dataset obtained from (1) (Ptáček et al., 2014) in which tweets are trained via hashtag based semisupervised learning, i.e., hashtags such as #not, #sarcasm and #irony are marked as sarcastic tweets and (2) (Riloff et al., 2013) in which Tweets are hand annotated and manually checked for sarcasm.", "For both datasets, we retrieve.", "Tweets using the Twitter API using the provided tweet IDs.", "• Reddit -Reddit 3 is a highly popular social forum and community.", "Similar to Tweets, sarcastic posts are obtained via the tag '/s' which are marked by the authors themselves.", "We use two Reddit datasets which are obtained from the subreddits /r/movies and /r/technology respectively.", "Datasets are subsets from (Khodak et al., 2017) .", "• Debates -We use two datasets 4 from the Internet Argument Corpus (IAC) (Lukin and Walker, 2017) which have been hand annotated for sarcasm.", "This dataset, unlike the first two, is mainly concerned with long text and provides a diverse comparison from the other datasets.", "The IAC corpus was designed for research on political debates on online forums.", "We use the V1 and V2 versions of the sarcasm corpus which are denoted as IAC-V1 and IAC-V2 respectively.", "The statistics of the datasets used in our experiments is reported in Table 1 .", "Compared Methods We compare our proposed model with the following algorithms.", "• NBOW is a simple neural bag-of-words baseline that sums all the word embeddings and passes the summed vector into a simple logistic regression layer.", "• CNN is a vanilla Convolutional Neural Network with max-pooling.", "CNNs are considered as compositional encoders that capture n-gram features by parameterized sliding windows.", "The filter width is 3 and number of filters f = 100.", "• LSTM is a vanilla Long Short-Term Memory Network.", "The size of the LSTM cell is set to d = 100.", "• ATT-LSTM (Attention-based LSTM) is a LSTM model with a neural attention mechanism applied to all the LSTM hidden outputs.", "We use a similar adaptation to (Yang et al., 2016) , albeit only at the document-level.", "• GRNN (Gated Recurrent Neural Network) is a Bidirectional Gated Recurrent Unit (GRU) model that was proposed for sarcasm detection by (Zhang et al., 2016) .", "GRNN uses a gated pooling mechanism to aggregate the hidden representations from a standard BiGRU model.", "Since we only compare on document-level sarcasm detection, we do not use the variant of GRNN that exploits user context.", "• CNN-LSTM-DNN (Convolutional LSTM + Deep Neural Network), proposed by (Ghosh and Veale, 2016) , is the state-of-theart model for sarcasm detection.", "This model is a combination of a CNN, LSTM and Deep Neural Network via stacking.", "It stacks two layers of 1D convolution with 2 LSTM layers.", "The output passes through a deep neural network (DNN) for prediction.", "Both CNN-LSTM-DNN (Ghosh and Veale, 2016) and GRNN (Zhang et al., 2016) are state-ofthe-art models for document-level sarcasm detection and have outperformed numerous neural and non-neural baselines.", "In particular, both works have well surpassed feature-based models 
(Support Vector Machines, etc.", "), as such we omit comparisons for the sake of brevity and focus comparisons with recent neural models instead.", "Moreover, since our work focuses only on document-level sarcasm detection, we do not compare against models that use external information such as user profiles, context, personality information (Ghosh and Veale, 2017) or emoji-based distant supervision (Felbo et al., 2017) .", "For our model, we report results on both multi-dimensional and single-dimensional intraattention.", "The two models are named as MIARN and SIARN respectively.", "Implementation Details and Metrics We adopt standard the evaluation metrics for the sarcasm detection task, i.e., macro-averaged F1 and accuracy score.", "Additionally, we also report precision and recall scores.", "All deep learning models are implemented using Tensor-Flow (Abadi et al., 2015) and optimized on a NVIDIA GTX1070 GPU.", "Text is preprocessed with NLTK 5 's Tweet tokenizer.", "Words that only appear once in the entire corpus are removed and marked with the UNK token.", "Document lengths are truncated at 40, 20, 80 tokens for Twitter, Reddit and Debates dataset respectively.", "Mentions of other users on the Twitter dataset are replaced by '@USER'.", "Documents with URLs (i.e., containing 'http') are removed from the corpus.", "Documents with less than 5 tokens are also removed.", "The learning optimizer used is the RMSProp with an initial learning rate of 0.001.", "The L2 regularization is set to 10 −8 .", "We initialize the word embedding layer with GloVe (Pennington et al., 2014) .", "We use the GloVe model trained on 2B Tweets for the Tweets and Reddit dataset.", "The Glove model trained on Common Crawl is used for the Debates corpus.", "The size of the word embeddings is fixed at d = 100 and are fine-tuned during training.", "In all experiments, we use a development set to select the best hyperparameters.", "Each model is trained for a total of 30 epochs and the model is saved each time the performance Tweets (Ptáček et al., 2014) Tweets (Riloff et al., 2013 on the development set is topped.", "The batch size is tuned amongst {128, 256, 512} for all datasets.", "The only exception is the Tweets dataset from (Riloff et al., 2013) , in which a batch size of 16 is used in lieu of the much smaller dataset size.", "For fair comparison, all models have the same hidden representation size and are set to 100 for both recurrent and convolutional based models (i.e., number of filters).", "For MIARN, the size of intraattention hidden representation is tuned amongst {4, 8, 10, 20}.", "Experimental Results Table 2, Table 3 and Table 4 reports a performance comparison of all benchmarked models on the Tweets, Reddit and Debates datasets respectively.", "We observe that our proposed SIARN and MIARN models achieve the best results across all six datasets.", "The relative improvement differs across domain and datasets.", "On the Tweets dataset from (Ptáček et al., 2014) , MIARN achieves about ≈ 2% − 2.2% improvement in terms of F1 and accuracy score when compared against the best baseline.", "On the other Tweets dataset from (Riloff et al., 2013) , the performance gain of our proposed model is larger, i.e., 3% − 5% improvement on average over most baselines.", "Our proposed SIARN and MIARN models achieve very competitive performance on the Reddit datasets, with an average of ≈ 2% margin improvement over the best baselines.", "Notably, the baselines we compare against are extremely competitive state-of-the-art 
neural network models.", "This further reinforces the effectiveness of our proposed approach.", "Additionally, the performance improvement on Debates (long text) is significantly larger than short text (i.e., Twitter and Reddit).", "For example, MI-ARN outperforms GRNN and CNN-LSTM-DNN by ≈ 8% − 10% on both IAC-V1 and IAC-V2.", "At this note, we can safely put RQ1 to rest.", "Overall, the performance of MIARN is often marginally better than SIARN (with some exceptions, e.g., Tweets dataset from (Riloff et al., 2013) ).", "We believe that this is attributed to the fact that more complex word-word relationships can be learned by using multi-dimensional values instead of single-dimensional scalars.", "The performance brought by our additional intra-attentive representations can be further observed by comparing against the vanilla LSTM model.", "Clearly, removing the intra-attention network reverts our model to the standard LSTM.", "The performance improvements are encouraging, leading to almost 10% improvement in terms of F1 and accuracy.", "On datasets with short text, the performance improvement is often a modest ≈ 2% − 3% (RQ2).", "Notably, our proposed models also perform much better on long text, which can be attributed to the intra-attentive representations explicitly modeling long range dependencies.", "Intuitively, this is problematic for models that only capture sequential dependencies (e.g., word by word).", "Finally, the relative performance of competitor methods are as expected.", "NBOW performs the worse, since it is just a naive bag-of-words model without any compositional or sequential information.", "On short text, LSTMs are overall better than CNNs.", "However, this trend is reversed on long text (i.e., Debates) since the LSTM model may be overburdened by overly long sequences.", "On short text, we also found that attention (or the gated pooling mechanism from GRNN) did not really help make any significant improvements over the vanilla LSTM model and a qualitative explanation to why this is so is deferred to the next section.", "However, attention helps for long text (such as debates), resulting in Attention LSTMs becoming the strongest baseline on the Debates datasets.", "However, our proposed intra-attentive model is both effective on short text and long text, outperforming Attention LSTMs consistently on all datasets.", "In-depth Model Analysis In this section, we present an in-depth analysis of our proposed model.", "More specifically, we not only aim to showcase the interpretability of our model but also explain how representations are formed.", "More specifically, we test our model (trained on Tweets dataset by (Ptáček et al., 2014) ) on two examples.", "We extract the attention maps of three models, namely MIARN, Attention LSTM (ATT-LSTM) and applying Attention mechanism directly on the word embeddings without using a LSTM encoder (ATT-RAW).", "Table 5 shows the visualization of the attention maps.", "In the first example (true label), we notice that the attention maps of MIARN are focusing on the words 'love' and 'ignored'.", "This is in concert with our intuition about modeling contrast and incongruity.", "On the other hand, both ATT-LSTM and ATT-RAW learn very different attention maps.", "As for ATT-LSTM, the attention weight is focused completely on the last representation -the token '!!'.", "Additionally, we also observed that this is true for many examples in the Tweets and Reddit dataset.", "We believe that this is the reason why standard neural attention does 
not help as what the attention mechanism is learning is to select the last representation (i.e., vanilla LSTM).", "Without the LSTM encoder, the attention weights focus on 'love' but not 'ignored'.", "This fails to capture any concept of contrast or incongruity.", "Next, we consider the false labeled example.", "This time, the attention maps of MIARN are not as distinct as before.", "However, they focus on sentiment-bearing words, composing the words 'ignored sucks' to form the majority of the intraattentive representation.", "This time, passing the vector made up of 'ignored sucks' allows the subsequent layers to recognize that there is no contrasting situation or sentiment.", "Similarly, ATT-LSTM focuses on the last word time which is totally non-interpretable.", "On the other hand, ATT-RAW focuses on relatively non-meaningful words such as 'big'.", "Overall, we analyzed two cases (positive and negative labels) and found that MIARN produces very explainable attention maps.", "In general, we found that MIARN is able to identify contrast and incongruity in sentences, allowing our model to better detect sarcasm.", "This is facilitated by modeling intra-sentence relationships.", "Notably, the standard vanilla attention is not explainable or interpretable.", "Conclusion Based on the intuition of intra-sentence similarity (i.e., looking in-between), we proposed a new neural network architecture for sarcasm detection.", "Our network incorporates a multi-dimensional intra-attention component that learns an intraattentive representation of the sentence, enabling it to detect contrastive sentiment, situations and incongruity.", "Extensive experiments over six public benchmarks confirm the empirical effectiveness of our proposed model.", "Our proposed MI-ARN model outperforms strong state-of-the-art baselines such as GRNN and CNN-LSTM-DNN.", "Analysis of the intra-attention scores shows that our model learns highly interpretable attention weights, paving the way for more explainable neural sarcasm detection methods." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Sarcasm Detection", "Deep Learning for Sarcasm Detection", "Attention Models for NLP", "Our Proposed Approach", "Input Encoding Layer", "Multi-dimensional Intra-Attention", "Long Short-Term Memory Encoder", "Prediction Layer", "Optimization and Learning", "Empirical Evaluation", "Datasets", "Compared Methods", "Implementation Details and Metrics", "Experimental Results", "In-depth Model Analysis", "Conclusion" ] }
GEM-SciDuet-train-32#paper-1046#slide-2
Proposed approach
Our idea: modeling contrast in order to reason with sarcasm o either between positive-negative sentiments or between literal-figurative scenarios o looking in-between: propose a multi-dimensional intra-attention recurrent network to capture both word-word relationships and long-range dependencies I absolutely love to be ignored! Perfect movie for people who can't fall asleep
Our idea: modeling contrast in order to reason with sarcasm o either between positive-negative sentiments or between literal-figurative scenarios o looking in-between: propose a multi-dimensional intra-attention recurrent network to capture both word-word relationships and long-range dependencies I absolutely love to be ignored! Perfect movie for people who can't fall asleep
[]
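The "Proposed approach" record describes the multi-dimensional intra-attention variant, where each word pair is first projected to a k-dimensional vector before the scalar score (its Eq. 4), and the intra-attentive vector v_a is then fused with the last hidden state of a separate LSTM encoder before a 2-way softmax (Eq. 5-7). Below is a hedged numpy sketch of that forward pass under stated assumptions: all weights are random, v_c is a random placeholder standing in for a real LSTM output, and the dimensions n, k, d are toy values, so the sketch shows only the shapes and the flow of computation, not the authors' trained TensorFlow model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, d = 8, 4, 16      # embedding size, intra-attention hidden size, LSTM hidden size
L = 6                   # toy sentence length
W = rng.standard_normal((L, n))                    # toy word embeddings

# Eq. (4): s_ij = W_p . ReLU(W_q [w_i; w_j] + b_q) + b_p
W_q = rng.standard_normal((2 * n, k)) * 0.1
b_q = np.zeros(k)
W_p = rng.standard_normal(k) * 0.1
b_p = 0.0

s = np.full((L, L), -np.inf)                       # diagonal stays masked
for i in range(L):
    for j in range(L):
        if i != j:
            h = np.maximum(0.0, np.concatenate([W[i], W[j]]) @ W_q + b_q)
            s[i, j] = h @ W_p + b_p

# Eq. (2)-(3): row-wise max pooling, softmax, weighted sum of embeddings
m = s.max(axis=1)
a = np.exp(m - m.max())
a /= a.sum()
v_a = a @ W                                        # intra-attentive representation

# Eq. (6)-(7): fuse with a compositional vector v_c and predict
v_c = rng.standard_normal(d)                       # placeholder for the last LSTM hidden state
W_z = rng.standard_normal((n + d, d)) * 0.1
b_z = np.zeros(d)
v = np.maximum(0.0, np.concatenate([v_a, v_c]) @ W_z + b_z)

W_f = rng.standard_normal((d, 2)) * 0.1
b_f = np.zeros(2)
logits = v @ W_f + b_f
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("class probabilities:", probs.round(3))
```

Note the design choice reflected here and stated in the record: the intra-attention operates on the raw embeddings, while the LSTM provides a separate compositional view, and the two are only combined at the fusion layer.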
GEM-SciDuet-train-32#paper-1046#slide-7
1046
Reasoning with Sarcasm by Reading In-between
Sarcasm is a sophisticated speech act which commonly manifests on social communities such as Twitter and Reddit. The prevalence of sarcasm on the social web is highly disruptive to opinion mining systems due to not only its tendency of polarity flipping but also usage of figurative language. Sarcasm commonly manifests with a contrastive theme either between positive-negative sentiments or between literal-figurative scenarios. In this paper, we revisit the notion of modeling contrast in order to reason with sarcasm. More specifically, we propose an attention-based neural model that looks in-between instead of across, enabling it to explicitly model contrast and incongruity. We conduct extensive experiments on six benchmark datasets from Twitter, Reddit and the Internet Argument Corpus. Our proposed model not only achieves state-of-the-art performance on all datasets but also enjoys improved interpretability.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239 ], "paper_content_text": [ "Introduction Sarcasm, commonly defined as 'An ironical taunt used to express contempt', is a challenging NLP problem due to its highly figurative nature.", "The usage of sarcasm on the social web is prevalent and can be frequently observed in reviews, microblogs (tweets) and online forums.", "As such, the battle against sarcasm is also regularly cited as one of the key challenges in sentiment analysis and opinion mining applications (Pang et al., 2008) .", "Hence, it is both imperative and intuitive that effective sarcasm detectors can bring about numerous benefits to opinion mining applications.", "Sarcasm is often associated to several linguistic phenomena such as (1) an explicit contrast between sentiments or (2) disparity between the conveyed emotion and the author's situation (context).", "Prior work has considered sarcasm to be a contrast between a positive and negative sentiment (Riloff et al., 2013) .", "Consider the following examples: 1.", "I absolutely love to be ignored!", "2.", "Yay!!!", "The best thing to wake up to is my neighbor's drilling.", "3.", "Perfect movie for people who can't fall asleep.", "Given the examples, we make a crucial observation -Sarcasm relies a lot on the semantic relationships (and contrast) between individual words and phrases in a sentence.", "For instance, the relationships between phrases {love, ignored}, {best, drilling} and {movie, asleep} (in the examples above) richly characterize the nature of sarcasm conveyed, i.e., word pairs tend to be contradictory and more often than not, express a juxtaposition of positive and negative terms.", "This concept is also explored in (Joshi et al., 2015) in which the authors refer to this phenomena as 'incongruity'.", "Hence, it would be useful to capture the relationships between selected word pairs in a sentence, i.e., looking in-between.", "State-of-the-art sarcasm detection systems mainly rely on deep and sequential neural networks (Ghosh and Veale, 2016; Zhang et al., 2016) .", "In these works, compositional encoders such as gated recurrent units (GRU) or long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) are often employed, with the input document being parsed one word at a time.", "This has several shortcomings for the sarcasm detection task.", "Firstly, there is no explicit interaction between word pairs, which hampers its ability to explicitly model contrast, 
incongruity or juxtaposition of situations.", "Secondly, it is difficult to capture long-range dependencies.", "In this case, contrastive situations (or sentiments) which are commonplace in sarcastic language may be hard to detect with simple sequential models.", "To overcome the weaknesses of standard sequential models such as recurrent neural networks, our work is based on the intuition that modeling intra-sentence relationships can not only improve classification performance but also pave the way for more explainable neural sarcasm detection methods.", "In other words, our key intuition manifests itself in the form of an attention-based neural network.", "While the key idea of most neural attention mechanisms is to focus on relevant words and sub-phrases, it merely looks across and does not explicitly capture word-word relationships.", "Hence, it suffers from the same shortcomings as sequential models.", "In this paper, our aim is to combine the effectiveness of state-of-the-art recurrent models while harnessing the intuition of looking in-between.", "We propose a multi-dimensional intra-attention recurrent network that models intricate similarities between each word pair in the sentence.", "In other words, our novel deep learning model aims to capture 'contrast' (Riloff et al., 2013) and 'incongruity' (Joshi et al., 2015) within end-to-end neural networks.", "Our model can be thought of selftargeted co-attention (Xiong et al., 2016) , which allows our model to not only capture word-word relationships but also long-range dependencies.", "Finally, we show that our model produces interpretable attention maps which aid in the explainability of model outputs.", "To the best of our knowledge, our model is the first attention model that can produce explainable results in the sarcasm detection task.", "Briefly, the prime contributions of this work can be summarized as follows: • We propose a new state-of-the-art method for sarcasm detection.", "Our proposed model, the Multi-dimensional Intra-Attention Recurrent Network (MIARN) is strongly based on the intuition of compositional learning by leveraging intra-sentence relationships.", "To the best of our knowledge, none of the existing state-of-the-art models considered exploiting intra-sentence relationships, solely relying on sequential composition.", "• We conduct extensive experiments on multiple benchmarks from Twitter, Reddit and the Internet Argument Corpus.", "Our proposed MIARN achieves highly competitive performance on all benchmarks, outperforming existing state-of-the-art models such as GRNN (Zhang et al., 2016) and CNN-LSTM-DNN (Ghosh and Veale, 2016) .", "Related Work Sarcasm is a complex linguistic phenomena that have long fascinated both linguists and NLP researchers.", "After all, a better computational understanding of this complicated speech act could potentially bring about numerous benefits to existing opinion mining applications.", "Across the rich history of research on sarcasm, several theories such as the Situational Disparity Theory (Wilson, 2006 ) and the Negation Theory (Giora, 1995) have emerged.", "In these theories, a common theme is a motif that is strongly grounded in contrast, whether in sentiment, intention, situation or context.", "(Riloff et al., 2013) propagates this premise forward, presenting an algorithm strongly based on the intuition that sarcasm arises from a juxtaposition of positive and negative situations.", "Sarcasm Detection Naturally, many works in this area have treated the sarcasm detection task 
as a standard text classification problem.", "An extremely comprehensive overview can be found at (Joshi et al., 2017) .", "Feature engineering approaches were highly popular, exploiting a wide diverse range of features such as syntactic patterns (Tsur et al., 2010) , sentiment lexicons (González-Ibánez et al., 2011), ngram (Reyes et al., 2013) , word frequency (Barbieri et al., 2014), word shape and pointedness features (Ptáček et al., 2014) , readability and flips (Rajadesingan et al., 2015) , etc.", "Notably, there have been quite a reasonable number of works that propose features based on similarity and contrast.", "(Hernández-Farías et al., 2015) measured the Wordnet based semantic similarity between words.", "(Joshi et al., 2015) proposed a framework based on explicit and implicit incongruity, utilizing features based on positive-negative patterns.", "(Joshi et al., 2016) proposed similarity features based on word embeddings.", "Deep Learning for Sarcasm Detection Deep learning based methods have recently garnered considerable interest in many areas of NLP research.", "In our problem domain, (Zhang et al., 2016) proposed a recurrent-based model with a gated pooling mechanism for sarcasm detection on Twitter.", "(Ghosh and Veale, 2016) proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that achieves state-of-the-art performance.", "While our work focuses on document-only sarcasm detection, several notable works have proposed models that exploit personality information (Ghosh and Veale, 2017) and user context (Amir et al., 2016) .", "Novel methods for sarcasm detection such as gaze / cognitive features (Mishra et al., 2016 (Mishra et al., , 2017 have also been explored.", "(Peled and Reichart, 2017) proposed a novel framework based on neural machine translation to convert a sequence from sarcastic to non-sarcastic.", "(Felbo et al., 2017) proposed a layer-wise training scheme that utilizes emoji-based distant supervision for sentiment analysis and sarcasm detection tasks.", "Attention Models for NLP In the context of NLP, the key idea of neural attention is to soft select a sequence of words based on their relative importance to the task at hand.", "Early innovations in attentional paradigms mainly involve neural machine translation (Luong et al., 2015; for aligning sequence pairs.", "Attention is also commonplace in many NLP applications such as sentiment classification (Chen et al., 2016; Yang et al., 2016) , aspect-level sentiment analysis (Tay et al., 2018s, 2017b Chen et al., 2017) and entailment classification (Rocktäschel et al., 2015) .", "Co-attention / Bi-Attention (Xiong et al., 2016; Seo et al., 2016) is a form of pairwise attention mechanism that was proposed to model query-document pairs.", "Intraattention can be interpreted as a self-targetted coattention and is seeing a lot promising results in many recent works (Vaswani et al., 2017; Parikh et al., 2016; Tay et al., 2017a; Shen et al., 2017) .", "The key idea is to model a sequence against itself, learning to attend while capturing long term dependencies and word-word level interactions.", "To the best of our knowledge, our work is not only the first work that only applies intra-attention to sarcasm detection but also the first attention model for sarcasm detection.", "Our Proposed Approach In this section, we describe our proposed model.", "Figure 1 illustrates our overall model architecture.", "Input Encoding Layer Our model accepts a sequence of one-hot encoded vectors as an input.", "Each 
one-hot encoded vector corresponds to a single word in the vocabulary.", "In the input encoding layer, each one-hot vector is converted into a low-dimensional vector representation (word embedding).", "The word embeddings are parameterized by an embedding layer W ∈ R n×|V | .", "As such, the output of this layer is a sequence of word embeddings, i.e., {w 1 , w 2 , · · · w } where is a predefined maximum sequence length.", "Multi-dimensional Intra-Attention In this section, we describe our multi-dimensional intra-attention mechanism for sarcasm detection.", "We first begin by describing the standard single-dimensional intra-attention.", "The multidimensional adaptation will be introduced later in this section.", "The key idea behind this layer is to look in-between, i.e., modeling the semantics between each word in the input sequence.", "We first begin by modeling the relationship of each word pair in the input sequence.", "A simple way to achieve this is to use a linear 1 transformation layer to project the concatenation of each word embedding pair into a scalar score as follows: s ij = W a ([w i ; w j ]) + b a (1) where W a ∈ R 2n×1 , b a ∈ R are the parameters of this layer.", "[.", "; .]", "is the vector concatenation operator and s ij is a scalar representing the affinity score between word pairs (w i , w j ).", "We can easily observe that s is a symmetrical matrix of × dimensions.", "In order to learn attention vector a, we apply a row-wise max-pooling operator on matrix s. a = sof tmax(max row s) (2) where a ∈ R is a vector representing the learned intra-attention weights.", "Then, the vector a is employed to learn weighted representation of {w 1 , w 2 · · · w } as follows: v a = i=1 w i a i (3) where v ∈ R n is the intra-attentive representation of the input sequence.", "While other choices of pooling operators may be also employed (e.g., mean-pooling over max-pooling), the choice of max-pooling is empirically motivated.", "Intuitively, this attention layer learns to pay attention based on a word's largest contribution to all words in the sequence.", "Since our objective is to highlight words that might contribute to the contrastive theories of sarcasm, a more discriminative pooling operator is desirable.", "Notably, we also mask values of s where i = j such that we do not allow the relationship scores of a word with respect to itself to influence the overall attention weights.", "Furthermore, our network can be considered as an 'inner' adaptation of neural attention, modeling intra-sentence relationships between the raw word representations instead of representations that have been compositionally manipulated.", "This allows word-to-word similarity to be modeled 'as it is' and not be influenced by composition.", "For example, when using the outputs of a compositional encoder (e.g., LSTM), matching words n and n + 1 might not be meaningful since they would be relatively similar in terms of semantic composition.", "For relatively short documents (such as tweets), it is also intuitive that attention typically focuses on the last hidden representation.", "Intuitively, the relationships between two words is often not straightforward.", "Words are complex and often hold more than one meanings (or word senses).", "As such, it might be beneficial to model multiple views between two words.", "This can be modeled by representing the word pair interaction with a vector instead of a scalar.", "As such, we propose a multi-dimensional adaptation of the intra-attention mechanism.", "The key idea 
here is that each word pair is projected down to a lowdimensional vector before we compute the affinity score, which allows it to not only capture one view (one scalar) but also multiple views.", "A modification to Equation (1) constitutes our Multi-Dimensional Intra-Attention variant.", "s ij = W p (ReLU (W q ([w i ; w j ]) + b q )) + b p (4) where W q ∈ R n×k , W p ∈ R k×1 , b q ∈ R k , b p ∈ R are the parameters of this layer.", "The final intraattentive representation is then learned with Equation (2) and Equation (3) which we do not repeat here for the sake of brevity.", "Long Short-Term Memory Encoder While we are able to simply use the learned representation v for prediction, it is clear that v does not encode compositional information and may miss out on important compositional phrases such as 'not happy'.", "Clearly, our intra-attention mechanism simply considers a word-by-word interaction and does not model the input document sequentially.", "As such, it is beneficial to use a separate compositional encoder for this purpose, i.e., learning compositional representations.", "To this end, we employ the standard Long Short-Term Memory (LSTM) encoder.", "The output of an LSTM encoder at each time-step can be briefly defined as: h i = LSTM(w, i), ∀i ∈ [1, .", ".", ". ]", "(5) where represents the maximum length of the sequence and h i ∈ R d is the hidden output of the LSTM encoder at time-step i. d is the size of the hidden units of the LSTM encoder.", "LSTM encoders are parameterized by gating mechanisms learned via nonlinear transformations.", "Since LSTMs are commonplace in standard NLP applications, we omit the technical details for the sake of brevity.", "Finally, to obtain a compositional representation of the input document, we use v c = h which is the last hidden output of the LSTM encoder.", "Note that the inputs to the LSTM encoder are the word embeddings right after the input encoding layer and not the output of the intraattention layer.", "We found that applying an LSTM on the intra-attentively scaled representations do not yield any benefits.", "Prediction Layer The inputs to the final prediction layer are two representations, namely (1) the intra-attentive representation (v a ∈ R n ) and (2) the compositional representation (v c ∈ R d ).", "This layer learns a joint representation of these two views using a nonlinear projection layer.", "v = ReLU (W z ([v a ; v c ]) + b z ) (6) where W z ∈ R (d+n)×d and b z ∈ R d .", "Finally, we pass v into a Softmax classification layer.", "y = Sof tmax(W f v + b f ) (7) where W f ∈ R d×2 , b f ∈ R 2 are the parameters of this layer.ŷ ∈ R 2 is the output layer of our proposed model.", "Optimization and Learning Our network is trained end-to-end, optimizing the standard binary cross-entropy loss function.", "J = − N i=1 [yi logŷi + (1 − yi) log(1 −ŷi)] + R (8) where J is the cost function,ŷ is the output of the network, R = ||θ|| L2 is the L2 regularization and λ is the weight of the regularizer.", "Empirical Evaluation In this section, we describe our experimental setup and results.", "Our experiments were designed to answer the following research questions (RQs).", "• RQ1 -Does our proposed approach outperform existing state-of-the-art models?", "• RQ2 -What are the impacts of some of the architectural choices of our model?", "How much does intra-attention contribute to the model performance?", "Is the Multi-Dimensional adaptation better than the Single-Dimensional adaptation?", "• RQ3 -What can we interpret from the intraattention 
layers?", "Does this align with our hypothesis about looking in-between and modeling contrast?", "Datasets We conduct our experiments on six publicly available benchmark datasets which span across three well-known sources.", "• Tweets -Twitter 2 is a microblogging platform which allows users to post statuses of less than 140 characters.", "We use two collections for sarcasm detection on tweets.", "More specifically, we use the dataset obtained from (1) (Ptáček et al., 2014) in which tweets are trained via hashtag based semisupervised learning, i.e., hashtags such as #not, #sarcasm and #irony are marked as sarcastic tweets and (2) (Riloff et al., 2013) in which Tweets are hand annotated and manually checked for sarcasm.", "For both datasets, we retrieve.", "Tweets using the Twitter API using the provided tweet IDs.", "• Reddit -Reddit 3 is a highly popular social forum and community.", "Similar to Tweets, sarcastic posts are obtained via the tag '/s' which are marked by the authors themselves.", "We use two Reddit datasets which are obtained from the subreddits /r/movies and /r/technology respectively.", "Datasets are subsets from (Khodak et al., 2017) .", "• Debates -We use two datasets 4 from the Internet Argument Corpus (IAC) (Lukin and Walker, 2017) which have been hand annotated for sarcasm.", "This dataset, unlike the first two, is mainly concerned with long text and provides a diverse comparison from the other datasets.", "The IAC corpus was designed for research on political debates on online forums.", "We use the V1 and V2 versions of the sarcasm corpus which are denoted as IAC-V1 and IAC-V2 respectively.", "The statistics of the datasets used in our experiments is reported in Table 1 .", "Compared Methods We compare our proposed model with the following algorithms.", "• NBOW is a simple neural bag-of-words baseline that sums all the word embeddings and passes the summed vector into a simple logistic regression layer.", "• CNN is a vanilla Convolutional Neural Network with max-pooling.", "CNNs are considered as compositional encoders that capture n-gram features by parameterized sliding windows.", "The filter width is 3 and number of filters f = 100.", "• LSTM is a vanilla Long Short-Term Memory Network.", "The size of the LSTM cell is set to d = 100.", "• ATT-LSTM (Attention-based LSTM) is a LSTM model with a neural attention mechanism applied to all the LSTM hidden outputs.", "We use a similar adaptation to (Yang et al., 2016) , albeit only at the document-level.", "• GRNN (Gated Recurrent Neural Network) is a Bidirectional Gated Recurrent Unit (GRU) model that was proposed for sarcasm detection by (Zhang et al., 2016) .", "GRNN uses a gated pooling mechanism to aggregate the hidden representations from a standard BiGRU model.", "Since we only compare on document-level sarcasm detection, we do not use the variant of GRNN that exploits user context.", "• CNN-LSTM-DNN (Convolutional LSTM + Deep Neural Network), proposed by (Ghosh and Veale, 2016) , is the state-of-theart model for sarcasm detection.", "This model is a combination of a CNN, LSTM and Deep Neural Network via stacking.", "It stacks two layers of 1D convolution with 2 LSTM layers.", "The output passes through a deep neural network (DNN) for prediction.", "Both CNN-LSTM-DNN (Ghosh and Veale, 2016) and GRNN (Zhang et al., 2016) are state-ofthe-art models for document-level sarcasm detection and have outperformed numerous neural and non-neural baselines.", "In particular, both works have well surpassed feature-based models 
(Support Vector Machines, etc.", "), as such we omit comparisons for the sake of brevity and focus comparisons with recent neural models instead.", "Moreover, since our work focuses only on document-level sarcasm detection, we do not compare against models that use external information such as user profiles, context, personality information (Ghosh and Veale, 2017) or emoji-based distant supervision (Felbo et al., 2017) .", "For our model, we report results on both multi-dimensional and single-dimensional intraattention.", "The two models are named as MIARN and SIARN respectively.", "Implementation Details and Metrics We adopt standard the evaluation metrics for the sarcasm detection task, i.e., macro-averaged F1 and accuracy score.", "Additionally, we also report precision and recall scores.", "All deep learning models are implemented using Tensor-Flow (Abadi et al., 2015) and optimized on a NVIDIA GTX1070 GPU.", "Text is preprocessed with NLTK 5 's Tweet tokenizer.", "Words that only appear once in the entire corpus are removed and marked with the UNK token.", "Document lengths are truncated at 40, 20, 80 tokens for Twitter, Reddit and Debates dataset respectively.", "Mentions of other users on the Twitter dataset are replaced by '@USER'.", "Documents with URLs (i.e., containing 'http') are removed from the corpus.", "Documents with less than 5 tokens are also removed.", "The learning optimizer used is the RMSProp with an initial learning rate of 0.001.", "The L2 regularization is set to 10 −8 .", "We initialize the word embedding layer with GloVe (Pennington et al., 2014) .", "We use the GloVe model trained on 2B Tweets for the Tweets and Reddit dataset.", "The Glove model trained on Common Crawl is used for the Debates corpus.", "The size of the word embeddings is fixed at d = 100 and are fine-tuned during training.", "In all experiments, we use a development set to select the best hyperparameters.", "Each model is trained for a total of 30 epochs and the model is saved each time the performance Tweets (Ptáček et al., 2014) Tweets (Riloff et al., 2013 on the development set is topped.", "The batch size is tuned amongst {128, 256, 512} for all datasets.", "The only exception is the Tweets dataset from (Riloff et al., 2013) , in which a batch size of 16 is used in lieu of the much smaller dataset size.", "For fair comparison, all models have the same hidden representation size and are set to 100 for both recurrent and convolutional based models (i.e., number of filters).", "For MIARN, the size of intraattention hidden representation is tuned amongst {4, 8, 10, 20}.", "Experimental Results Table 2, Table 3 and Table 4 reports a performance comparison of all benchmarked models on the Tweets, Reddit and Debates datasets respectively.", "We observe that our proposed SIARN and MIARN models achieve the best results across all six datasets.", "The relative improvement differs across domain and datasets.", "On the Tweets dataset from (Ptáček et al., 2014) , MIARN achieves about ≈ 2% − 2.2% improvement in terms of F1 and accuracy score when compared against the best baseline.", "On the other Tweets dataset from (Riloff et al., 2013) , the performance gain of our proposed model is larger, i.e., 3% − 5% improvement on average over most baselines.", "Our proposed SIARN and MIARN models achieve very competitive performance on the Reddit datasets, with an average of ≈ 2% margin improvement over the best baselines.", "Notably, the baselines we compare against are extremely competitive state-of-the-art 
neural network models.", "This further reinforces the effectiveness of our proposed approach.", "Additionally, the performance improvement on Debates (long text) is significantly larger than short text (i.e., Twitter and Reddit).", "For example, MI-ARN outperforms GRNN and CNN-LSTM-DNN by ≈ 8% − 10% on both IAC-V1 and IAC-V2.", "At this note, we can safely put RQ1 to rest.", "Overall, the performance of MIARN is often marginally better than SIARN (with some exceptions, e.g., Tweets dataset from (Riloff et al., 2013) ).", "We believe that this is attributed to the fact that more complex word-word relationships can be learned by using multi-dimensional values instead of single-dimensional scalars.", "The performance brought by our additional intra-attentive representations can be further observed by comparing against the vanilla LSTM model.", "Clearly, removing the intra-attention network reverts our model to the standard LSTM.", "The performance improvements are encouraging, leading to almost 10% improvement in terms of F1 and accuracy.", "On datasets with short text, the performance improvement is often a modest ≈ 2% − 3% (RQ2).", "Notably, our proposed models also perform much better on long text, which can be attributed to the intra-attentive representations explicitly modeling long range dependencies.", "Intuitively, this is problematic for models that only capture sequential dependencies (e.g., word by word).", "Finally, the relative performance of competitor methods are as expected.", "NBOW performs the worse, since it is just a naive bag-of-words model without any compositional or sequential information.", "On short text, LSTMs are overall better than CNNs.", "However, this trend is reversed on long text (i.e., Debates) since the LSTM model may be overburdened by overly long sequences.", "On short text, we also found that attention (or the gated pooling mechanism from GRNN) did not really help make any significant improvements over the vanilla LSTM model and a qualitative explanation to why this is so is deferred to the next section.", "However, attention helps for long text (such as debates), resulting in Attention LSTMs becoming the strongest baseline on the Debates datasets.", "However, our proposed intra-attentive model is both effective on short text and long text, outperforming Attention LSTMs consistently on all datasets.", "In-depth Model Analysis In this section, we present an in-depth analysis of our proposed model.", "More specifically, we not only aim to showcase the interpretability of our model but also explain how representations are formed.", "More specifically, we test our model (trained on Tweets dataset by (Ptáček et al., 2014) ) on two examples.", "We extract the attention maps of three models, namely MIARN, Attention LSTM (ATT-LSTM) and applying Attention mechanism directly on the word embeddings without using a LSTM encoder (ATT-RAW).", "Table 5 shows the visualization of the attention maps.", "In the first example (true label), we notice that the attention maps of MIARN are focusing on the words 'love' and 'ignored'.", "This is in concert with our intuition about modeling contrast and incongruity.", "On the other hand, both ATT-LSTM and ATT-RAW learn very different attention maps.", "As for ATT-LSTM, the attention weight is focused completely on the last representation -the token '!!'.", "Additionally, we also observed that this is true for many examples in the Tweets and Reddit dataset.", "We believe that this is the reason why standard neural attention does 
not help as what the attention mechanism is learning is to select the last representation (i.e., vanilla LSTM).", "Without the LSTM encoder, the attention weights focus on 'love' but not 'ignored'.", "This fails to capture any concept of contrast or incongruity.", "Next, we consider the false labeled example.", "This time, the attention maps of MIARN are not as distinct as before.", "However, they focus on sentiment-bearing words, composing the words 'ignored sucks' to form the majority of the intraattentive representation.", "This time, passing the vector made up of 'ignored sucks' allows the subsequent layers to recognize that there is no contrasting situation or sentiment.", "Similarly, ATT-LSTM focuses on the last word time which is totally non-interpretable.", "On the other hand, ATT-RAW focuses on relatively non-meaningful words such as 'big'.", "Overall, we analyzed two cases (positive and negative labels) and found that MIARN produces very explainable attention maps.", "In general, we found that MIARN is able to identify contrast and incongruity in sentences, allowing our model to better detect sarcasm.", "This is facilitated by modeling intra-sentence relationships.", "Notably, the standard vanilla attention is not explainable or interpretable.", "Conclusion Based on the intuition of intra-sentence similarity (i.e., looking in-between), we proposed a new neural network architecture for sarcasm detection.", "Our network incorporates a multi-dimensional intra-attention component that learns an intraattentive representation of the sentence, enabling it to detect contrastive sentiment, situations and incongruity.", "Extensive experiments over six public benchmarks confirm the empirical effectiveness of our proposed model.", "Our proposed MI-ARN model outperforms strong state-of-the-art baselines such as GRNN and CNN-LSTM-DNN.", "Analysis of the intra-attention scores shows that our model learns highly interpretable attention weights, paving the way for more explainable neural sarcasm detection methods." ] }
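The implementation details quoted in the record above (NLTK's TweetTokenizer, UNK replacement for words seen only once, 40/20/80-token truncation for Twitter/Reddit/Debates, '@USER' masking, removal of URL-bearing and very short documents, macro-averaged F1 plus accuracy) are concrete enough to sketch. The snippet below is an illustrative reconstruction under those stated settings, not the authors' released pipeline; the mention-masking heuristic, the function names, and the macro averaging of precision/recall are my assumptions.

```python
# Sketch of the preprocessing and evaluation protocol described in the record above.
from collections import Counter
from nltk.tokenize import TweetTokenizer
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

MAX_LEN = {"twitter": 40, "reddit": 20, "debates": 80}  # truncation lengths stated in the paper
tokenizer = TweetTokenizer()

def preprocess(documents, source="twitter"):
    """Tokenize, filter and truncate raw documents as described in the implementation details."""
    tokenized = []
    for doc in documents:
        if "http" in doc:                  # documents containing URLs are removed
            continue
        tokens = tokenizer.tokenize(doc.lower())
        tokens = ["@USER" if t.startswith("@") else t for t in tokens]  # mask user mentions (heuristic)
        if len(tokens) < 5:                # documents with fewer than 5 tokens are removed
            continue
        tokenized.append(tokens[: MAX_LEN[source]])
    # words that appear only once in the corpus are replaced by an UNK token
    counts = Counter(t for doc in tokenized for t in doc)
    return [[t if counts[t] > 1 else "<UNK>" for t in doc] for doc in tokenized]

def evaluate(y_true, y_pred):
    """Macro-averaged F1 plus accuracy, precision and recall, as reported in the result tables."""
    return {
        "macro_f1": f1_score(y_true, y_pred, average="macro"),
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
    }
```

For example, `preprocess(raw_posts, source="debates")` applies the 80-token cut used for the IAC corpora before evaluation with `evaluate(...)`.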
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Sarcasm Detection", "Deep Learning for Sarcasm Detection", "Attention Models for NLP", "Our Proposed Approach", "Input Encoding Layer", "Multi-dimensional Intra-Attention", "Long Short-Term Memory Encoder", "Prediction Layer", "Optimization and Learning", "Empirical Evaluation", "Datasets", "Compared Methods", "Implementation Details and Metrics", "Experimental Results", "In-depth Model Analysis", "Conclusion" ] }
GEM-SciDuet-train-32#paper-1046#slide-7
Conclusion
We proposed a new neural network architecture for sarcasm detection o incorporates a multi-dimensional intra-attention component that learns an intra-attentive representation of the sentence o enabling it to detect contrastive sentiment, situations and incongruity. Outperforms strong state-of-the-art baselines such as GRNN and CNN-LSTM-DNN over six public benchmarks. Able to learn highly interpretable attention weights, paving the way for more explainable neural sarcasm detection methods.
We proposed a new neural network architecture for sarcasm detection o incorporates a multi-dimensional intra-attention component that learns an intra-attentive representation of the sentence o enabling it to detect contrastive sentiment, situations and incongruity. Outperforms strong state-of-the-art baselines such as GRNN and CNN-LSTM-DNN over six public benchmarks. Able to learn highly interpretable attention weights, paving the way for more explainable neural sarcasm detection methods.
[]
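The slide record above refers to a multi-dimensional intra-attention component, but its equations are not reproduced in this file. The sketch below is therefore only a hypothetical reading: a small pairwise-scoring network over word pairs, max-pooling over pairs, and a softmax over words to form the intra-attentive representation. The layer sizes follow the record's hyperparameters (100-dimensional embeddings, an intra-attention hidden size tuned in {4, 8, 10, 20}), but the pooling choice and everything else are my assumptions, not the authors' specification.

```python
# Hypothetical sketch of a multi-dimensional intra-attention layer in the spirit of MIARN.
import torch
import torch.nn as nn

class IntraAttention(nn.Module):
    def __init__(self, emb_dim=100, attn_dim=10):
        super().__init__()
        # scores every word pair with a small multi-dimensional projection
        self.pair_mlp = nn.Sequential(nn.Linear(2 * emb_dim, attn_dim), nn.ReLU(),
                                      nn.Linear(attn_dim, 1))

    def forward(self, emb):                                   # emb: (batch, seq_len, emb_dim)
        b, n, d = emb.shape
        left = emb.unsqueeze(2).expand(b, n, n, d)
        right = emb.unsqueeze(1).expand(b, n, n, d)
        pair_scores = self.pair_mlp(torch.cat([left, right], dim=-1)).squeeze(-1)  # (b, n, n)
        # mask self-pairs so each word is contrasted against the *other* words
        eye = torch.eye(n, dtype=torch.bool, device=emb.device)
        pair_scores = pair_scores.masked_fill(eye, float("-inf"))
        word_scores = pair_scores.max(dim=-1).values          # strongest interaction per word
        alpha = torch.softmax(word_scores, dim=-1)            # intra-attention weights over words
        return (alpha.unsqueeze(-1) * emb).sum(dim=1)         # intra-attentive representation
```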
GEM-SciDuet-train-33#paper-1047#slide-0
1047
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226 ], "paper_content_text": [ "Introduction Many models learn by contrasting losses on observed positive examples with those on some fictitious negative examples, trying to decrease some score on positive ones while increasing it on negative ones.", "There are multiple reasons why such contrastive learning approach is needed.", "Computational tractability is one.", "For instance, instead of using softmax to predict a word for learning word embeddings, noise contrastive estimation (NCE) (Dyer, 2014; Mnih and Teh, 2012) can be used in skip-gram or CBOW word embedding models (Gutmann and Hyvärinen, 2012; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) .", "Another reason is * authors contributed equally † Work done while author was an intern at Borealis AI modeling need, as certain assumptions are best expressed as some score or energy in margin based or un-normalized probability models (Smith and Eisner, 2005) .", "For example, modeling entity relations as translations or variants thereof in a vector space naturally leads to a distance-based score to be minimized for observed entity-relation-entity triplets (Bordes et al., 2013) .", "Given a scoring function, the gradient of the model's parameters on observed positive examples can be readily computed, but the negative phase requires a design decision on how to sample data.", "In noise contrastive estimation for word embeddings, a negative example is formed by replacing a component of a positive pair by randomly selecting a sampled word from the vocabulary, resulting in a fictitious word-context pair which would be unlikely to actually exist in the dataset.", "This negative sampling by corruption approach is also used in learning knowledge graph embeddings (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Trouillon et al., 2016; Yang et al., 2014; Dettmers et al., 2017) , order embeddings (Vendrov et al., 2016) , caption generation (Dai and Lin, 2017) , etc.", "Typically the corruption distribution is the same for all inputs like in skip-gram or CBOW NCE, rather than being a conditional distribution that takes into account information about the input sample under consideration.", "Furthermore, the corruption process usually only encodes a human prior as to what constitutes a hard negative sample, rather than being learned from data.", "For these two reasons, the simple fixed corruption process often yields only easy negative examples.", "Easy negatives are sub-optimal 
for learning discriminative representation as they do not force the model to find critical characteristics of observed positive data, which has been independently discovered in applications outside NLP previously (Shrivastava et al., 2016) .", "Even if hard negatives are occasionally reached, the infrequency means slow convergence.", "Designing a more sophisticated corruption process could be fruitful, but requires costly trialand-error by a human expert.", "In this work, we propose to augment the simple corruption noise process in various embedding models with an adversarially learned conditional distribution, forming a mixture negative sampler that adapts to the underlying data and the embedding model training progress.", "The resulting method is referred to as adversarial contrastive estimation (ACE).", "The adaptive conditional model engages in a minimax game with the primary embedding model, much like in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a) , where a discriminator net (D), tries to distinguish samples produced by a generator (G) from real data (Goodfellow et al., 2014b) .", "In ACE, the main model learns to distinguish between a real positive example and a negative sample selected by the mixture of a fixed NCE sampler and an adversarial generator.", "The main model and the generator takes alternating turns to update their parameters.", "In fact, our method can be viewed as a conditional GAN (Mirza and Osindero, 2014) on discrete inputs, with a mixture generator consisting of a learned and a fixed distribution, with additional techniques introduced to achieve stable and convergent training of embedding models.", "In our proposed ACE approach, the conditional sampler finds harder negatives than NCE, while being able to gracefully fall back to NCE whenever the generator cannot find hard negatives.", "We demonstrate the efficacy and generality of the proposed method on three different learning tasks, word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) and knowledge graph embeddings (Ji et al., 2015) .", "Method Background: contrastive learning In the most general form, our method applies to supervised learning problems with a contrastive objective of the following form: L(ω) = E p(x + ,y + ,y − ) l ω (x + , y + , y − ) (1) where l ω (x + , y + , y − ) captures both the model with parameters ω and the loss that scores a positive tuple (x + , y + ) against a negative one (x + , y − ).", "E p(x + ,y + ,y − ) (.)", "denotes expectation with respect to some joint distribution over positive and negative samples.", "Furthermore, by the law of total expectation, and the fact that given x + , the negative sampling is not dependent on the positive label, i.e.", "p(y + , y − |x + ) = p(y + |x + )p(y − |x + ), Eq.", "1 can be re-written as E p(x + ) [E p(y + |x + )p(y − |x + ) l ω (x + , y + , y − )] (2) Separable loss In the case where the loss decomposes into a sum of scores on positive and negative tuples such as l ω (x + , y + , y − ) = s ω (x + , y + )−s ω (x + , y − ), then Expression.", "2 becomes E p + (x) [E p + (y|x) s ω (x, y) − E p − (y|x)sω (x, y)] (3) where we moved the + and − to p for notational brevity.", "Learning by stochastic gradient descent aims to adjust ω to pushing down s ω (x, y) on samples from p + while pushing ups ω (x, y) on samples from p − .", "Note that for generality, the scoring function for negative samples, denoted bỹ s ω , could be slightly different from s ω .", "For instance,s could contain a margin as 
in the case of Order Embeddings in Sec.", "4.2.", "Non separable loss Eq.", "1 is the general form that we would like to consider because for certain problems, the loss function cannot be separated into sums of terms containing only positive (x + , y + ) and terms with negatives (x + , y − ).", "An example of such a nonseparable loss is the triplet ranking loss (Schroff et al., 2015) : l ω = max(0, η + s ω (x + , y + ) − s ω (x + , y − )), which does not decompose due to the rectification.", "Noise contrastive estimation The typical NCE approach in tasks such as word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) , and knowledge graph embeddings can be viewed as a special case of Eq.", "2 by taking p(y − |x + ) to be some unconditional p nce (y).", "This leads to efficient computation during training, however, p nce (y) sacrifices the sampling efficiency of learning as the negatives produced using a fixed distribution are not tailored toward x + , and as a result are not necessarily hard negative examples.", "Thus, the model is not forced to discover discriminative representation of observed positive data.", "As training progresses, more and more negative examples are correctly learned, the probability of drawing a hard negative example diminishes further, causing slow convergence.", "Adversarial mixture noise To remedy the above mentioned problem of a fixed unconditional negative sampler, we propose to augment it into a mixture one, λp nce (y) + (1 − λ)g θ (y|x), where g θ is a conditional distribution with a learnable parameter θ and λ is a hyperparameter.", "The objective in Expression.", "2 can then be written as (conditioned on x for notational brevity): L(ω, θ; x) = λ E p(y + |x)pnce(y − ) l ω (x, y + , y − ) + (1 − λ) E p(y + |x)g θ (y − |x) l ω (x, y + , y − ) (4) We learn (ω, θ) in a GAN-style minimax game: min ω max θ V (ω, θ) = min ω max θ E p + (x) L(ω, θ; x) (5 ) The embedding model behind l ω (x, y + , y − ) is similar to the discriminator in (conditional) GAN (or critic in Wasserstein or Energy-based GAN (Zhao et al., 2016) , while g θ (y|x) acts as the generator.", "Henceforth, we will use the term discriminator (D) and embedding model interchangeably, and refer to g θ as the generator.", "Learning the generator There is one important distinction to typical GAN: g θ (y|x) defines a categorical distribution over possible y values, and samples are drawn accordingly; in contrast to typical GAN over continuous data space such as images, where samples are generated by an implicit generative model that warps noise vectors into data points.", "Due to the discrete sampling step, g θ cannot learn by receiving gradient through the discriminator.", "One possible solution is to use the Gumbel-softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016) , which gives a differentiable approximation.", "However, this differentiability comes at the cost of drawing N Gumbel samples per each categorical sample, where N is the number of categories.", "For word embeddings, N is the vocabulary size, and for knowledge graph embeddings, N is the number of entities, both leading to infeasible computational requirements.", "Instead, we use the REINFORCE (Williams, 1992) gradient estimator for ∇ θ L(θ, x): (1−λ) E −l ω (x, y + , y − )∇ θ log(g θ (y − |x)) (6) where the expectation E is with respect to p(y + , y − |x) = p(y + |x)g θ (y − |x), and the discriminator loss l ω (x, y + , y − ) acts as the reward.", "With a separable loss, the (conditional) value 
function of the minimax game is: L(ω, θ; x) = E p + (y|x) s ω (x, y) − E pnce(y)sω (x, y) − E g θ (y|x)sω (x, y) (7) and only the last term depends on the generator parameter ω.", "Hence, with a separable loss, the reward is −s(x + , y − ).", "This reduction does not happen with a non-separable loss, and we have to use l ω (x, y + , y − ).", "Entropy and training stability GAN training can suffer from instability and degeneracy where the generator probability mass collapses to a few modes or points.", "Much work has been done to stabilize GAN training in the continuous case Gulrajani et al., 2017; Cao et al., 2018) .", "In ACE, if the generator g θ probability mass collapses to a few candidates, then after the discriminator successfully learns about these negatives, g θ cannot adapt to select new hard negatives, because the REIN-FORCE gradient estimator Eq.", "6 relies on g θ being able to explore other candidates during sampling.", "Therefore, if the g θ probability mass collapses, instead of leading to oscillation as in typical GAN, the min-max game in ACE reaches an equilibrium where the discriminator wins and g θ can no longer adapt, then ACE falls back to NCE since the negative sampler has another mixture component from NCE.", "This behavior of gracefully falling back to NCE is more desirable than the alternative of stalled training if p − (y|x) does not have a simple p nce mixture component.", "However, we would still like to avoid such collapse, as the adversarial samples provide greater learning signals than NCE samples.", "To this end, we propose to use a regularizer to encourage the categorical distribution g θ (y|x) to have high entropy.", "In order to make the the regularizer interpretable and its hyperparameters easy to tune, we design the following form: R ent (x) = min(0, c − H(g θ (y|x))) (8) where H(g θ (y|x)) is the entropy of the categorical distribution g θ (y|x), and c = log(k) is the entropy of a uniform distribution over k choices, and k is a hyper-parameter.", "Intuitively, R ent expresses the prior that the generator should spread its mass over more than k choices for each x.", "Handling false negatives During negative sampling, p − (y|x) could actually produce y that forms a positive pair that exists in the training set, i.e., a false negative.", "This possibility exists in NCE already, but since p nce is not adaptive, the probability of sampling a false negative is low.", "Hence in NCE, the score on this false negative (true observation) pair is pushed up less in the negative term than in the positive term.", "However, with the adaptive sampler, g ω (y|x), false negatives become a much more severe issue.", "g ω (y|x) can learn to concentrate its mass on a few false negatives, significantly canceling the learning of those observations in the positive phase.", "The entropy regularization reduces this problem as it forces the generator to spread its mass, hence reducing the chance of a false negative.", "To further alleviate this problem, whenever computationally feasible, we apply an additional two-step technique.", "First, we maintain a hash map of the training data in memory, and use it to efficiently detect if a negative sample (x + , y − ) is an actual observation.", "If so, its contribution to the loss is given a zero weight in ω learning step.", "Second, to upate θ in the generator learning step, the reward for false negative samples are replaced by a large penalty, so that the REINFORCE gradient update would steer g θ away from those samples.", "The second 
step is needed to prevent null computation where g θ learns to sample false negatives which are subsequently ignored by the discriminator update for ω. Variance Reduction The basic REINFORCE gradient estimator is poised with high variance, so in practice one often needs to apply variance reduction techniques.", "The most basic form of variance reduction is to subtract a baseline from the reward.", "As long as the baseline is not a function of actions (i.e., samples y − being drawn), the REINFORCE gradient estimator remains unbiased.", "More advanced gradient estimators exist that also reduce variance (Grathwohl et al., 2017; Tucker et al., 2017; Liu et al., 2018) , but for simplicity we use the self-critical baseline method (Rennie et al., 2016) , where the baseline is b(x) = l ω (y + , y , x), or b(x) = −s ω (y , x) in the separable loss case, and y = argmax i g θ (y i |x).", "In other words, the baseline is the reward of the most likely sample according to the generator.", "2.7 Improving exploration in g θ by leveraging NCE samples In Sec.", "2.4 we touched on the need for sufficient exploration in g θ .", "It is possible to also leverage negative samples from NCE to help the generator learn.", "This is essentially off-policy exploration in reinforcement learning since NCE samples are not drawn according to g θ (y|x).", "The generator learning can use importance re-weighting to leverage those samples.", "The resulting REIN-FORCE gradient estimator is basically the same as Eq.", "6 except that the rewards are reweighted by g θ (y − |x)/p nce (y − ), and the expectation is with respect to p(y + |x)p nce (y − ).", "This additional offpolicy learning term provides gradient information for generator learning if g θ (y − |x) is not zero, meaning that for it to be effective in helping exploration, the generator cannot be collapsed at the first place.", "Hence, in practice, this term is only used to further help on top of the entropy regularization, but it does not replace it.", "Related Work Smith and Eisner (2005) proposed contrastive estimation as a way for unsupervised learning of log-linear models by taking implicit evidence from user-defined neighborhoods around observed datapoints.", "Gutmann and Hyvärinen (2010) introduced NCE as an alternative to the hierarchical softmax.", "In the works of Mnih and Teh (2012) and Mnih and Kavukcuoglu (2013) , NCE is applied to log-bilinear models and Vaswani et al.", "(2013) applied NCE to neural probabilistic language models (Yoshua et al., 2003) .", "Compared to these previous NCE methods that rely on simple fixed sampling heuristics, ACE uses an adaptive sampler that produces harder negatives.", "In the domain of max-margin estimation for structured prediction (Taskar et al., 2005) , loss augmented MAP inference plays the role of finding hard negatives (the hardest).", "However, this inference is only tractable in a limited class of models such structured SVM (Tsochantaridis et al., 2005) .", "Compared to those models that use exact maximization to find the hardest negative configuration each time, the generator in ACE can be viewed as learning an approximate amortized inference network.", "Concurrently to this work, Tu and Gimpel (2018) proposes a very similar framework, using a learned inference network for Structured prediction energy networks (SPEN) (Belanger and McCallum, 2016) .", "Concurrent with our work, there have been other interests in applying the GAN to NLP problems (Fedus et al., 2018; Wang et al., 2018; Cai and Wang, 2017) .", 
"Knowledge graph models naturally lend to a GAN setup, and has been the subject of study in Wang et al.", "(2018) and Cai and Wang (2017) .", "These two concurrent works are most closely related to one of the three tasks on which we study ACE in this work.", "Besides a more general formulation that applies to problems beyond those considered in Wang et al.", "(2018) and Cai and Wang (2017) , the techniques introduced in our work on handling false negatives and entropy regularization lead to improved experimental results as shown in Sec.", "5.4.", "Application of ACE on three tasks 4.1 Word Embeddings Word embeddings learn a vector representation of words from co-occurrences in a text corpus.", "NCE casts this learning problem as a binary classification where the model tries to distinguish positive word and context pairs, from negative noise samples composed of word and false context pairs.", "The NCE objective in Skip-gram (Mikolov et al., 2013) for word embeddings is a separable loss of the form: L = − wt∈V [log p(y = 1|w t , w + c ) + K c=1 log p(y = 0|w t , w − c )] (9) Here, w + c is sampled from the set of true contexts and w − c ∼ Q is sampled k times from a fixed noise distribution.", "Mikolov et al.", "(2013) introduced a further simplification of NCE, called \"Negative Sampling\" (Dyer, 2014) .", "With respect to our ACE framework, the difference between NCE and Negative Sampling is inconsequential, so we continue the discussion using NCE.", "A drawback of this sampling scheme is that it favors more common words as context.", "Another issue is that the negative context words are sampled in the same way, rather than tailored toward the actual target word.", "To apply ACE to this problem we first define the value function for the minimax game, V (D, G), as follows: V (D, G) = E p + (wc) [log D(w c , w t )] − E pnce(wc) [− log(1 − D(w c , w t ))] − E g θ (wc|wt) [− log(1 − D(w c , w t ))] (10) with D = p(y = 1|w t , w c ) and G = g θ (w c |w t ).", "Implementation details For our experiments, we train all our models on a single pass of the May 2017 dump of the English Wikipedia with lowercased unigrams.", "The vocabulary size is restricted to the top 150k most frequent words when training from scratch while for finetuning we use the same vocabulary as Pennington et al.", "(2014) , which is 400k of the most frequent words.", "We use 5 NCE samples for each positive sample and 1 adversarial sample in a window size of 10 and the same positive subsampling scheme proposed by Mikolov et al.", "(2013) .", "Learning for both G and D uses Adam (Kingma and Ba, 2014) optimizer with its default parameters.", "Our conditional discriminator is modeled using the Skip-Gram architecture, which is a two layer neural network with a linear mapping between the layers.", "The generator network consists of an embedding layer followed by two small hidden layers, followed by an output softmax layer.", "The first layer of the generator shares its weights with the second embedding layer in the discriminator network, which we find really speeds up convergence as the generator does not have to relearn its own set of embeddings.", "The difference between the discriminator and generator is that a sigmoid nonlinearity is used after the second layer in the discriminator, while in the generator, a softmax layer is used to define a categorical distribution over negative word candidates.", "We find that controlling the generator entropy is critical for finetuning experiments as otherwise the generator collapses to its 
favorite negative sample.", "The word embeddings are taken to be the first dense matrix in the discriminator.", "Order Embeddings Hypernym Prediction As introduced in Vendrov et al.", "(2016) , ordered representations over hierarchy can be learned by order embeddings.", "An example task for such ordered representation is hypernym prediction.", "A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second.", "For completeness, we briefly describe order embeddings, then analyze ACE on the hypernym prediction task.", "In order embeddings, each entity is represented by a vector in R N , the score for a positive ordered pair of entities (x, y) is defined by s ω (x, y) = ||max(0, y − x)|| 2 and, score for a negative ordered pair (x + , y − ) is defined bỹ s ω (x + , y − ) = max{0, η − s(x + , y − )}, where is η is the margin.", "Let f (u) be the embedding function which takes an entity as input and outputs en embedding vector.", "We define P as a set of positive pairs and N as negative pairs, the separable loss function for order embedding task is defined by: L = (u,v)∈P s ω (f (u), f (v)))+ (u,v)∈Ns (f (u), f (v)) (11) Implementation details Our generator for this task is just a linear fully connected softmax layer, taking an embedding vector from discriminator as input and outputting a categorical distribution over the entity set.", "For the discriminator, we inherit all model setting from Vendrov et al.", "(2016) : we use 50 dimensions hidden state and bash size 1000, a learning rate of 0.01 and the Adam optimizer.", "For the generator, we use a batch size of 1000, a learning rate 0.01 and the Adam optimizer.", "We apply weight decay with rate 0.1 and entropy loss regularization as described in Sec.", "2.4.", "We handle false negative as described in Sec.", "2.5.", "After cross validation, variance reduction and leveraging NCE samples does not greatly affect the order embedding task.", "Knowledge Graph Embeddings Knowledge graphs contain entity and relation data of the form (head entity, relation, tail entity), and the goal is to learn from observed positive entity relations and predict missing links (a.k.a.", "link prediction).", "There have been many works on knowledge graph embeddings, e.g.", "TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , TransH (Wang et al., 2014) , TransD (Ji et al., 2015) , Complex (Trouillon et al., 2016) , DistMult (Yang et al., 2014) and ConvE (Dettmers et al., 2017) .", "Many of them use a contrastive learning objective.", "Here we take TransD as an example, and modify its noise contrastive learning to ACE, and demonstrate significant improvement in sample efficiency and link prediction results.", "Implementation details Let a positive entity-relation-entity triplet be denoted by ξ + = (h + , r + , t + ), and a negative triplet could either have its head or tail be a negative sample, i.e.", "ξ − = (h − , r + , t + ) or ξ − = (h + , r + , t − ).", "In either case, the general formulation in Sec.", "2.1 still applies.", "The non-separable loss function takes on the form: l = max(0, η + s ω (ξ + ) − s ω (ξ − )) (12) The scoring rule is: s = h ⊥ + r − t ⊥ (13) where r is the embedding vector for r, and h ⊥ is projection of the embedding of h onto the space of r by h ⊥ = h + r p h p h, where r p and h p are projection parameters of the model.", "t ⊥ is defined in a similar way through parameters t, t p and r p .", "The form of the generator g θ (t − |r + , h + ) is chosen to be f θ (h ⊥ , h ⊥ + r), where f θ 
is a feedforward neural net that concatenates its two input arguments, then propagates through two hidden layers, followed by a final softmax output layer.", "As a function of (r + , h + ), g θ shares parameter with the discriminator, as the inputs to f θ are the embedding vectors.", "During generator learning, only θ is updated and the TransD model embedding parameters are frozen.", "Experiments We evaluate ACE with experiments on word embeddings, order embeddings, and knowledge graph embeddings tasks.", "In short, whenever the original learning objective is contrastive (all tasks except Glove fine-tuning) our results consistently show that ACE improves over NCE.", "In some cases, we include additional comparisons to the state-of-art results on the task to put the significance of such improvements in context: the generic ACE can often make a reasonable baseline competitive with SOTA methods that are optimized for the task.", "For word embeddings, we evaluate models trained from scratch as well as fine-tuned Glove models (Pennington et al., 2014) on word similarity tasks that consist of computing the similarity between word pairs where the ground truth is an average of human scores.", "We choose the Rare word dataset (Luong et al., 2013) and WordSim-353 (Finkelstein et al., 2001) by virtue of our hypothesis that ACE learns better representations for both rare and frequent words.", "We also qualitatively evaluate ACE word embeddings by inspecting the nearest neighbors of selected words.", "For the hypernym prediction task, following Vendrov et al.", "(2016) , hypernym pairs are created from the WordNet hierarchy's transitive closure.", "We use the released random development split and test split from Vendrov et al.", "(2016) , which both contain 4000 edges.", "For knowledge graph embeddings, we use TransD (Ji et al., 2015) as our base model, and perform ablation study to analyze the behavior of ACE with various add-on features, and confirm that entropy regularization is crucial for good performance in ACE.", "We also obtain link prediction results that are competitive or superior to the stateof-arts on the WN18 dataset (Bordes et al., 2014) .", "Training Word Embeddings from scratch In this experiment, we empirically observe that training word embeddings using ACE converges significantly faster than NCE after one epoch.", "As shown in Fig.", "3 both ACE (a mixture of p nce and g θ ) and just g θ (denoted by ADV) significantly outperforms the NCE baseline, with an absolute improvement of 73.1% and 58.5% respectively on RW score.", "We note similar results on WordSim-353 dataset where ACE and ADV outperforms NCE by 40.4% and 45.7%.", "We also evaluate our model qualitatively by inspecting the nearest neighbors of selected words in Table.", "1.", "We first present the five nearest neighbors to each word to show that both NCE and ACE models learn sensible embeddings.", "We then show that ACE embeddings have much better semantic relevance in a larger neighborhood (nearest neighbor 45-50).", "Finetuning Word Embeddings We take off-the-shelf pre-trained Glove embeddings which were trained using 6 billion tokens (Pennington et al., 2014) and fine-tune them using our algorithm.", "It is interesting to note that the original Glove objective does not fit into the contrastive learning framework, but nonetheless we find that they benefit from ACE.", "In fact, we observe that training such that 75% of the words appear as positive contexts is sufficient to beat the largest dimensionality pre-trained Glove 
model on word similarity tasks.", "We evaluate our performance on the Rare Word and WordSim353 data.", "As can be seen from our results in Table 2 , ACE on RW is not always better and for the 100d and 300d Glove embeddings is marginally worse.", "However, on WordSim353 ACE does considerably better across the board to the point where 50d Glove embeddings outperform the 300d baseline Glove model.", "Hypernym Prediction As shown in Table 3 , with ACE training, our method achieves a 1.5% improvement on accu- racy over Vendrov et al.", "(2016) without tunning any of the discriminator's hyperparameters.", "We further report training curve in Fig.", "1 , we report loss curve on randomly sampled pairs.", "We stress that in the ACE model, we train random pairs and generator generated pairs jointly, as shown in Fig.", "2 , hard negatives help the order embedding model converges faster.", "Ablation Study and Improving TransD To analyze different aspects of ACE, we perform an ablation study on the knowledge graph embedding task.", "As described in Sec.", "4.3, the base Method Accuracy (%) order-embeddings 90.6 order-embeddings + Our ACE 92.0 Table 3 : Order Embedding Performance model (discriminator) we apply ACE to is TransD (Ji et al., 2015) .", "Fig.", "5 shows validation performance as training progresses.", "All variants of ACE converges to better results than base NCE.", "Among ACE variants, all methods that include entropy regularization significantly outperform without entropy regularization.", "Without the self critical baseline variance reduction, learning could progress faster at the beginning but the final performance suffers slightly.", "The best performance is obtained without the additional off-policy learning of the generator.", "Table.", "4 shows the final test results on WN18 link prediction task.", "It is interesting to note that ACE improves MRR score more significantly than hit@10.", "As MRR is a lot more sensitive to the top rankings, i.e., how the correct configuration ranks among the competitive alternatives, this is consistent with the fact that ACE samples hard negatives and forces the base model to learn a more discriminative representation of the positive examples.", "(Trouillon et al., 2016) , which achieves the SOTA on this dataset.", "Among all TransD based models (the best results in this group is underlined), ACE improves over basic NCE and another GAN based approach KBGAN.", "The gap on MRR is likely due to the difference between TransD and COMPLEX models.", "Hard Negative Analysis To better understand the effect of the adversarial samples proposed by the generator we plot the discriminator loss on both p nce and g θ samples.", "In this context, a harder sample means a higher loss assigned by the discriminator.", "Fig.", "4 shows that discriminator loss for the word embedding task on g θ samples are always higher than on p nce samples, confirming that the generator is indeed sampling harder negatives.", "For Hypernym Prediction task, Fig.2 shows discriminator loss on negative pairs sampled from NCE and ACE respectively.", "The higher the loss the harder the negative pair is.", "As indicated in the left plot, loss on the ACE negative terms collapses faster than on the NCE negatives.", "After adding entropy regularization and weight decay, the generator works as expected.", "Limitations When the generator softmax is large, the current implementation of ACE training is computationally expensive.", "Although ACE converges faster per iteration, it may converge more slowly 
on wall-clock time depending on the cost of the softmax.", "However, embeddings are typically used as pre-trained building blocks for subsequent tasks.", "Thus, their learning is usually the pre-computation step for the more complex downstream models and spending more time is justified, especially with GPU acceleration.", "We believe that the computational cost could potentially be reduced via some existing techniques such as the \"augment and reduce\" variational inference of (Ruiz et al., 2018), adaptive softmax (Grave et al., 2016) , or the \"sparsely-gated\" softmax of Shazeer et al.", "(2017) , but leave that to future work.", "Another limitation is on the theoretical front.", "As noted in Goodfellow (2014) , GAN learning does not implement maximum likelihood estimation (MLE), while NCE has MLE as an asymptotic limit.", "To the best of our knowledge, more distant connections between GAN and MLE training are not known, and tools for analyzing the equilibrium of a min-max game where players are parametrized by deep neural nets are currently not available to the best of our knowledge.", "Conclusion In this paper, we propose Adversarial Contrastive Estimation as a general technique for improving supervised learning problems that learn by contrasting observed and fictitious samples.", "Specifically, we use a generator network in a conditional GAN like setting to propose hard negative examples for our discriminator model.", "We find that a mixture distribution of randomly sampling negative examples along with an adaptive negative sampler leads to improved performances on a variety of embedding tasks.", "We validate our hypothesis that hard negative examples are critical to optimal learning and can be proposed via our ACE framework.", "Finally, we find that controlling the entropy of the generator through a regularization term and properly handling false negatives is crucial for successful training." ] }
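Putting together the pieces described in the record above — the mixture negative sampler λ·p_nce(y) + (1−λ)·g_θ(y|x), the discriminator loss acting as the REINFORCE reward (−s̃(x⁺, y⁻) in the separable case), the self-critical baseline, and the entropy floor with c = log(k) — one ACE training step could look roughly as follows. This is a hedged sketch, not the authors' implementation: `score_fn`, `generator` and `nce_sampler` are placeholder callables, realizing the mixture by a per-batch Bernoulli draw is one simple choice among several, and the sign of the entropy term is chosen so that minimizing the generator loss keeps its entropy above the floor.

```python
# Minimal PyTorch-style sketch of one ACE update for a separable loss s(x, y+) - s(x, y-).
import math
import torch
import torch.nn.functional as F

def ace_step(x, y_pos, score_fn, generator, nce_sampler, lam=0.5, k=100):
    logits = generator(x)                              # conditional logits over negative candidates
    probs = F.softmax(logits, dim=-1)

    # mixture negative sampler: NCE noise with prob. lam, adversarial generator otherwise
    from_generator = bool(torch.rand(()) >= lam)
    y_neg = (torch.multinomial(probs.detach(), 1).squeeze(-1) if from_generator
             else nce_sampler(x))

    # discriminator (embedding model): push the score down on positives, up on negatives
    disc_loss = (score_fn(x, y_pos) - score_fn(x, y_neg)).mean()
    disc_loss.backward()                               # then step the embedding optimizer

    if not from_generator:                             # no REINFORCE signal for NCE draws here
        return

    # generator: REINFORCE with the negated negative score as reward
    reward = -score_fn(x, y_neg).detach()              # hard negatives (low score) => high reward
    y_greedy = probs.detach().argmax(dim=-1)           # self-critical baseline: greedy sample
    baseline = -score_fn(x, y_greedy).detach()
    log_prob = torch.log(probs.gather(-1, y_neg.unsqueeze(-1)).squeeze(-1) + 1e-12)
    pg_loss = -((reward - baseline) * log_prob).mean()

    # entropy floor with c = log(k): active only while H(g_theta) < log(k)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    ent_penalty = torch.clamp(math.log(k) - entropy, min=0.0)
    (pg_loss + ent_penalty.mean()).backward()          # then step the generator optimizer
```

Guarding the generator update so it fires only on samples drawn from g_θ sidesteps the off-policy reweighting discussed in the record; leveraging the NCE draws as well would require the importance weights described there.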
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "3", "4", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "5.5", "6", "7" ], "paper_header_content": [ "Introduction", "Background: contrastive learning", "Adversarial mixture noise", "Learning the generator", "Entropy and training stability", "Handling false negatives", "Variance Reduction", "Related Work", "Application of ACE on three tasks 4.1 Word Embeddings", "Order Embeddings Hypernym Prediction", "Knowledge Graph Embeddings", "Experiments", "Training Word Embeddings from scratch", "Finetuning Word Embeddings", "Hypernym Prediction", "Ablation Study and Improving TransD", "Hard Negative Analysis", "Limitations", "Conclusion" ] }
GEM-SciDuet-train-33#paper-1047#slide-0
Contrastive Estimation
Many Machine Learning models learn by trying to separate positive examples from negative examples. Positive Examples are taken from the observed real data distribution. Negative Examples are any other configurations that are not observed. Data is in the form of tuples or triplets: (x+, y+) and (x+, y-) are positive and negative data points respectively.
Many Machine Learning models learn by trying to separate positive examples from negative examples. Positive Examples are taken from the observed real data distribution. Negative Examples are any other configurations that are not observed. Data is in the form of tuples or triplets: (x+, y+) and (x+, y-) are positive and negative data points respectively.
[]
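The slide above describes contrastive learning at the level of (x+, y+) versus (x+, y-) tuples. For concreteness, the two loss shapes mentioned in the paper record — the separable score difference and the non-separable triplet ranking loss — are sketched below with a placeholder score function; in the non-separable case the whole loss value has to serve as the generator's reward, which is why the distinction matters for ACE.

```python
# Illustrative loss shapes for contrastive estimation; `score` is any model-defined scorer.
def separable_loss(score, x, y_pos, y_neg):
    # push the score down on the observed pair and up on the fictitious one
    return score(x, y_pos) - score(x, y_neg)

def triplet_ranking_loss(score, x, y_pos, y_neg, eta=1.0):
    # non-separable: the hinge couples the positive and negative terms
    return max(0.0, eta + score(x, y_pos) - score(x, y_neg))
```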
GEM-SciDuet-train-33#paper-1047#slide-1
1047
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226 ], "paper_content_text": [ "Introduction Many models learn by contrasting losses on observed positive examples with those on some fictitious negative examples, trying to decrease some score on positive ones while increasing it on negative ones.", "There are multiple reasons why such contrastive learning approach is needed.", "Computational tractability is one.", "For instance, instead of using softmax to predict a word for learning word embeddings, noise contrastive estimation (NCE) (Dyer, 2014; Mnih and Teh, 2012) can be used in skip-gram or CBOW word embedding models (Gutmann and Hyvärinen, 2012; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) .", "Another reason is * authors contributed equally † Work done while author was an intern at Borealis AI modeling need, as certain assumptions are best expressed as some score or energy in margin based or un-normalized probability models (Smith and Eisner, 2005) .", "For example, modeling entity relations as translations or variants thereof in a vector space naturally leads to a distance-based score to be minimized for observed entity-relation-entity triplets (Bordes et al., 2013) .", "Given a scoring function, the gradient of the model's parameters on observed positive examples can be readily computed, but the negative phase requires a design decision on how to sample data.", "In noise contrastive estimation for word embeddings, a negative example is formed by replacing a component of a positive pair by randomly selecting a sampled word from the vocabulary, resulting in a fictitious word-context pair which would be unlikely to actually exist in the dataset.", "This negative sampling by corruption approach is also used in learning knowledge graph embeddings (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Trouillon et al., 2016; Yang et al., 2014; Dettmers et al., 2017) , order embeddings (Vendrov et al., 2016) , caption generation (Dai and Lin, 2017) , etc.", "Typically the corruption distribution is the same for all inputs like in skip-gram or CBOW NCE, rather than being a conditional distribution that takes into account information about the input sample under consideration.", "Furthermore, the corruption process usually only encodes a human prior as to what constitutes a hard negative sample, rather than being learned from data.", "For these two reasons, the simple fixed corruption process often yields only easy negative examples.", "Easy negatives are sub-optimal 
for learning discriminative representation as they do not force the model to find critical characteristics of observed positive data, which has been independently discovered in applications outside NLP previously (Shrivastava et al., 2016) .", "Even if hard negatives are occasionally reached, the infrequency means slow convergence.", "Designing a more sophisticated corruption process could be fruitful, but requires costly trialand-error by a human expert.", "In this work, we propose to augment the simple corruption noise process in various embedding models with an adversarially learned conditional distribution, forming a mixture negative sampler that adapts to the underlying data and the embedding model training progress.", "The resulting method is referred to as adversarial contrastive estimation (ACE).", "The adaptive conditional model engages in a minimax game with the primary embedding model, much like in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a) , where a discriminator net (D), tries to distinguish samples produced by a generator (G) from real data (Goodfellow et al., 2014b) .", "In ACE, the main model learns to distinguish between a real positive example and a negative sample selected by the mixture of a fixed NCE sampler and an adversarial generator.", "The main model and the generator takes alternating turns to update their parameters.", "In fact, our method can be viewed as a conditional GAN (Mirza and Osindero, 2014) on discrete inputs, with a mixture generator consisting of a learned and a fixed distribution, with additional techniques introduced to achieve stable and convergent training of embedding models.", "In our proposed ACE approach, the conditional sampler finds harder negatives than NCE, while being able to gracefully fall back to NCE whenever the generator cannot find hard negatives.", "We demonstrate the efficacy and generality of the proposed method on three different learning tasks, word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) and knowledge graph embeddings (Ji et al., 2015) .", "Method Background: contrastive learning In the most general form, our method applies to supervised learning problems with a contrastive objective of the following form: L(ω) = E p(x + ,y + ,y − ) l ω (x + , y + , y − ) (1) where l ω (x + , y + , y − ) captures both the model with parameters ω and the loss that scores a positive tuple (x + , y + ) against a negative one (x + , y − ).", "E p(x + ,y + ,y − ) (.)", "denotes expectation with respect to some joint distribution over positive and negative samples.", "Furthermore, by the law of total expectation, and the fact that given x + , the negative sampling is not dependent on the positive label, i.e.", "p(y + , y − |x + ) = p(y + |x + )p(y − |x + ), Eq.", "1 can be re-written as E p(x + ) [E p(y + |x + )p(y − |x + ) l ω (x + , y + , y − )] (2) Separable loss In the case where the loss decomposes into a sum of scores on positive and negative tuples such as l ω (x + , y + , y − ) = s ω (x + , y + )−s ω (x + , y − ), then Expression.", "2 becomes E p + (x) [E p + (y|x) s ω (x, y) − E p − (y|x)sω (x, y)] (3) where we moved the + and − to p for notational brevity.", "Learning by stochastic gradient descent aims to adjust ω to pushing down s ω (x, y) on samples from p + while pushing ups ω (x, y) on samples from p − .", "Note that for generality, the scoring function for negative samples, denoted bỹ s ω , could be slightly different from s ω .", "For instance,s could contain a margin as 
in the case of Order Embeddings in Sec.", "4.2.", "Non separable loss Eq.", "1 is the general form that we would like to consider because for certain problems, the loss function cannot be separated into sums of terms containing only positive (x + , y + ) and terms with negatives (x + , y − ).", "An example of such a nonseparable loss is the triplet ranking loss (Schroff et al., 2015) : l ω = max(0, η + s ω (x + , y + ) − s ω (x + , y − )), which does not decompose due to the rectification.", "Noise contrastive estimation The typical NCE approach in tasks such as word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) , and knowledge graph embeddings can be viewed as a special case of Eq.", "2 by taking p(y − |x + ) to be some unconditional p nce (y).", "This leads to efficient computation during training, however, p nce (y) sacrifices the sampling efficiency of learning as the negatives produced using a fixed distribution are not tailored toward x + , and as a result are not necessarily hard negative examples.", "Thus, the model is not forced to discover discriminative representation of observed positive data.", "As training progresses, more and more negative examples are correctly learned, the probability of drawing a hard negative example diminishes further, causing slow convergence.", "Adversarial mixture noise To remedy the above mentioned problem of a fixed unconditional negative sampler, we propose to augment it into a mixture one, λp nce (y) + (1 − λ)g θ (y|x), where g θ is a conditional distribution with a learnable parameter θ and λ is a hyperparameter.", "The objective in Expression.", "2 can then be written as (conditioned on x for notational brevity): L(ω, θ; x) = λ E p(y + |x)pnce(y − ) l ω (x, y + , y − ) + (1 − λ) E p(y + |x)g θ (y − |x) l ω (x, y + , y − ) (4) We learn (ω, θ) in a GAN-style minimax game: min ω max θ V (ω, θ) = min ω max θ E p + (x) L(ω, θ; x) (5 ) The embedding model behind l ω (x, y + , y − ) is similar to the discriminator in (conditional) GAN (or critic in Wasserstein or Energy-based GAN (Zhao et al., 2016) , while g θ (y|x) acts as the generator.", "Henceforth, we will use the term discriminator (D) and embedding model interchangeably, and refer to g θ as the generator.", "Learning the generator There is one important distinction to typical GAN: g θ (y|x) defines a categorical distribution over possible y values, and samples are drawn accordingly; in contrast to typical GAN over continuous data space such as images, where samples are generated by an implicit generative model that warps noise vectors into data points.", "Due to the discrete sampling step, g θ cannot learn by receiving gradient through the discriminator.", "One possible solution is to use the Gumbel-softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016) , which gives a differentiable approximation.", "However, this differentiability comes at the cost of drawing N Gumbel samples per each categorical sample, where N is the number of categories.", "For word embeddings, N is the vocabulary size, and for knowledge graph embeddings, N is the number of entities, both leading to infeasible computational requirements.", "Instead, we use the REINFORCE (Williams, 1992) gradient estimator for ∇ θ L(θ, x): (1−λ) E −l ω (x, y + , y − )∇ θ log(g θ (y − |x)) (6) where the expectation E is with respect to p(y + , y − |x) = p(y + |x)g θ (y − |x), and the discriminator loss l ω (x, y + , y − ) acts as the reward.", "With a separable loss, the (conditional) value 
function of the minimax game is: L(ω, θ; x) = E p + (y|x) s ω (x, y) − E pnce(y)sω (x, y) − E g θ (y|x)sω (x, y) (7) and only the last term depends on the generator parameter ω.", "Hence, with a separable loss, the reward is −s(x + , y − ).", "This reduction does not happen with a non-separable loss, and we have to use l ω (x, y + , y − ).", "Entropy and training stability GAN training can suffer from instability and degeneracy where the generator probability mass collapses to a few modes or points.", "Much work has been done to stabilize GAN training in the continuous case Gulrajani et al., 2017; Cao et al., 2018) .", "In ACE, if the generator g θ probability mass collapses to a few candidates, then after the discriminator successfully learns about these negatives, g θ cannot adapt to select new hard negatives, because the REIN-FORCE gradient estimator Eq.", "6 relies on g θ being able to explore other candidates during sampling.", "Therefore, if the g θ probability mass collapses, instead of leading to oscillation as in typical GAN, the min-max game in ACE reaches an equilibrium where the discriminator wins and g θ can no longer adapt, then ACE falls back to NCE since the negative sampler has another mixture component from NCE.", "This behavior of gracefully falling back to NCE is more desirable than the alternative of stalled training if p − (y|x) does not have a simple p nce mixture component.", "However, we would still like to avoid such collapse, as the adversarial samples provide greater learning signals than NCE samples.", "To this end, we propose to use a regularizer to encourage the categorical distribution g θ (y|x) to have high entropy.", "In order to make the the regularizer interpretable and its hyperparameters easy to tune, we design the following form: R ent (x) = min(0, c − H(g θ (y|x))) (8) where H(g θ (y|x)) is the entropy of the categorical distribution g θ (y|x), and c = log(k) is the entropy of a uniform distribution over k choices, and k is a hyper-parameter.", "Intuitively, R ent expresses the prior that the generator should spread its mass over more than k choices for each x.", "Handling false negatives During negative sampling, p − (y|x) could actually produce y that forms a positive pair that exists in the training set, i.e., a false negative.", "This possibility exists in NCE already, but since p nce is not adaptive, the probability of sampling a false negative is low.", "Hence in NCE, the score on this false negative (true observation) pair is pushed up less in the negative term than in the positive term.", "However, with the adaptive sampler, g ω (y|x), false negatives become a much more severe issue.", "g ω (y|x) can learn to concentrate its mass on a few false negatives, significantly canceling the learning of those observations in the positive phase.", "The entropy regularization reduces this problem as it forces the generator to spread its mass, hence reducing the chance of a false negative.", "To further alleviate this problem, whenever computationally feasible, we apply an additional two-step technique.", "First, we maintain a hash map of the training data in memory, and use it to efficiently detect if a negative sample (x + , y − ) is an actual observation.", "If so, its contribution to the loss is given a zero weight in ω learning step.", "Second, to upate θ in the generator learning step, the reward for false negative samples are replaced by a large penalty, so that the REINFORCE gradient update would steer g θ away from those samples.", "The second 
step is needed to prevent null computation where g θ learns to sample false negatives which are subsequently ignored by the discriminator update for ω. Variance Reduction The basic REINFORCE gradient estimator is poised with high variance, so in practice one often needs to apply variance reduction techniques.", "The most basic form of variance reduction is to subtract a baseline from the reward.", "As long as the baseline is not a function of actions (i.e., samples y − being drawn), the REINFORCE gradient estimator remains unbiased.", "More advanced gradient estimators exist that also reduce variance (Grathwohl et al., 2017; Tucker et al., 2017; Liu et al., 2018) , but for simplicity we use the self-critical baseline method (Rennie et al., 2016) , where the baseline is b(x) = l ω (y + , y , x), or b(x) = −s ω (y , x) in the separable loss case, and y = argmax i g θ (y i |x).", "In other words, the baseline is the reward of the most likely sample according to the generator.", "2.7 Improving exploration in g θ by leveraging NCE samples In Sec.", "2.4 we touched on the need for sufficient exploration in g θ .", "It is possible to also leverage negative samples from NCE to help the generator learn.", "This is essentially off-policy exploration in reinforcement learning since NCE samples are not drawn according to g θ (y|x).", "The generator learning can use importance re-weighting to leverage those samples.", "The resulting REIN-FORCE gradient estimator is basically the same as Eq.", "6 except that the rewards are reweighted by g θ (y − |x)/p nce (y − ), and the expectation is with respect to p(y + |x)p nce (y − ).", "This additional offpolicy learning term provides gradient information for generator learning if g θ (y − |x) is not zero, meaning that for it to be effective in helping exploration, the generator cannot be collapsed at the first place.", "Hence, in practice, this term is only used to further help on top of the entropy regularization, but it does not replace it.", "Related Work Smith and Eisner (2005) proposed contrastive estimation as a way for unsupervised learning of log-linear models by taking implicit evidence from user-defined neighborhoods around observed datapoints.", "Gutmann and Hyvärinen (2010) introduced NCE as an alternative to the hierarchical softmax.", "In the works of Mnih and Teh (2012) and Mnih and Kavukcuoglu (2013) , NCE is applied to log-bilinear models and Vaswani et al.", "(2013) applied NCE to neural probabilistic language models (Yoshua et al., 2003) .", "Compared to these previous NCE methods that rely on simple fixed sampling heuristics, ACE uses an adaptive sampler that produces harder negatives.", "In the domain of max-margin estimation for structured prediction (Taskar et al., 2005) , loss augmented MAP inference plays the role of finding hard negatives (the hardest).", "However, this inference is only tractable in a limited class of models such structured SVM (Tsochantaridis et al., 2005) .", "Compared to those models that use exact maximization to find the hardest negative configuration each time, the generator in ACE can be viewed as learning an approximate amortized inference network.", "Concurrently to this work, Tu and Gimpel (2018) proposes a very similar framework, using a learned inference network for Structured prediction energy networks (SPEN) (Belanger and McCallum, 2016) .", "Concurrent with our work, there have been other interests in applying the GAN to NLP problems (Fedus et al., 2018; Wang et al., 2018; Cai and Wang, 2017) .", 
"Knowledge graph models naturally lend to a GAN setup, and has been the subject of study in Wang et al.", "(2018) and Cai and Wang (2017) .", "These two concurrent works are most closely related to one of the three tasks on which we study ACE in this work.", "Besides a more general formulation that applies to problems beyond those considered in Wang et al.", "(2018) and Cai and Wang (2017) , the techniques introduced in our work on handling false negatives and entropy regularization lead to improved experimental results as shown in Sec.", "5.4.", "Application of ACE on three tasks 4.1 Word Embeddings Word embeddings learn a vector representation of words from co-occurrences in a text corpus.", "NCE casts this learning problem as a binary classification where the model tries to distinguish positive word and context pairs, from negative noise samples composed of word and false context pairs.", "The NCE objective in Skip-gram (Mikolov et al., 2013) for word embeddings is a separable loss of the form: L = − wt∈V [log p(y = 1|w t , w + c ) + K c=1 log p(y = 0|w t , w − c )] (9) Here, w + c is sampled from the set of true contexts and w − c ∼ Q is sampled k times from a fixed noise distribution.", "Mikolov et al.", "(2013) introduced a further simplification of NCE, called \"Negative Sampling\" (Dyer, 2014) .", "With respect to our ACE framework, the difference between NCE and Negative Sampling is inconsequential, so we continue the discussion using NCE.", "A drawback of this sampling scheme is that it favors more common words as context.", "Another issue is that the negative context words are sampled in the same way, rather than tailored toward the actual target word.", "To apply ACE to this problem we first define the value function for the minimax game, V (D, G), as follows: V (D, G) = E p + (wc) [log D(w c , w t )] − E pnce(wc) [− log(1 − D(w c , w t ))] − E g θ (wc|wt) [− log(1 − D(w c , w t ))] (10) with D = p(y = 1|w t , w c ) and G = g θ (w c |w t ).", "Implementation details For our experiments, we train all our models on a single pass of the May 2017 dump of the English Wikipedia with lowercased unigrams.", "The vocabulary size is restricted to the top 150k most frequent words when training from scratch while for finetuning we use the same vocabulary as Pennington et al.", "(2014) , which is 400k of the most frequent words.", "We use 5 NCE samples for each positive sample and 1 adversarial sample in a window size of 10 and the same positive subsampling scheme proposed by Mikolov et al.", "(2013) .", "Learning for both G and D uses Adam (Kingma and Ba, 2014) optimizer with its default parameters.", "Our conditional discriminator is modeled using the Skip-Gram architecture, which is a two layer neural network with a linear mapping between the layers.", "The generator network consists of an embedding layer followed by two small hidden layers, followed by an output softmax layer.", "The first layer of the generator shares its weights with the second embedding layer in the discriminator network, which we find really speeds up convergence as the generator does not have to relearn its own set of embeddings.", "The difference between the discriminator and generator is that a sigmoid nonlinearity is used after the second layer in the discriminator, while in the generator, a softmax layer is used to define a categorical distribution over negative word candidates.", "We find that controlling the generator entropy is critical for finetuning experiments as otherwise the generator collapses to its 
favorite negative sample.", "The word embeddings are taken to be the first dense matrix in the discriminator.", "Order Embeddings Hypernym Prediction As introduced in Vendrov et al.", "(2016) , ordered representations over hierarchy can be learned by order embeddings.", "An example task for such ordered representation is hypernym prediction.", "A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second.", "For completeness, we briefly describe order embeddings, then analyze ACE on the hypernym prediction task.", "In order embeddings, each entity is represented by a vector in R N , the score for a positive ordered pair of entities (x, y) is defined by s ω (x, y) = ||max(0, y − x)|| 2 and, score for a negative ordered pair (x + , y − ) is defined bỹ s ω (x + , y − ) = max{0, η − s(x + , y − )}, where is η is the margin.", "Let f (u) be the embedding function which takes an entity as input and outputs en embedding vector.", "We define P as a set of positive pairs and N as negative pairs, the separable loss function for order embedding task is defined by: L = (u,v)∈P s ω (f (u), f (v)))+ (u,v)∈Ns (f (u), f (v)) (11) Implementation details Our generator for this task is just a linear fully connected softmax layer, taking an embedding vector from discriminator as input and outputting a categorical distribution over the entity set.", "For the discriminator, we inherit all model setting from Vendrov et al.", "(2016) : we use 50 dimensions hidden state and bash size 1000, a learning rate of 0.01 and the Adam optimizer.", "For the generator, we use a batch size of 1000, a learning rate 0.01 and the Adam optimizer.", "We apply weight decay with rate 0.1 and entropy loss regularization as described in Sec.", "2.4.", "We handle false negative as described in Sec.", "2.5.", "After cross validation, variance reduction and leveraging NCE samples does not greatly affect the order embedding task.", "Knowledge Graph Embeddings Knowledge graphs contain entity and relation data of the form (head entity, relation, tail entity), and the goal is to learn from observed positive entity relations and predict missing links (a.k.a.", "link prediction).", "There have been many works on knowledge graph embeddings, e.g.", "TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , TransH (Wang et al., 2014) , TransD (Ji et al., 2015) , Complex (Trouillon et al., 2016) , DistMult (Yang et al., 2014) and ConvE (Dettmers et al., 2017) .", "Many of them use a contrastive learning objective.", "Here we take TransD as an example, and modify its noise contrastive learning to ACE, and demonstrate significant improvement in sample efficiency and link prediction results.", "Implementation details Let a positive entity-relation-entity triplet be denoted by ξ + = (h + , r + , t + ), and a negative triplet could either have its head or tail be a negative sample, i.e.", "ξ − = (h − , r + , t + ) or ξ − = (h + , r + , t − ).", "In either case, the general formulation in Sec.", "2.1 still applies.", "The non-separable loss function takes on the form: l = max(0, η + s ω (ξ + ) − s ω (ξ − )) (12) The scoring rule is: s = h ⊥ + r − t ⊥ (13) where r is the embedding vector for r, and h ⊥ is projection of the embedding of h onto the space of r by h ⊥ = h + r p h p h, where r p and h p are projection parameters of the model.", "t ⊥ is defined in a similar way through parameters t, t p and r p .", "The form of the generator g θ (t − |r + , h + ) is chosen to be f θ (h ⊥ , h ⊥ + r), where f θ 
is a feedforward neural net that concatenates its two input arguments, then propagates through two hidden layers, followed by a final softmax output layer.", "As a function of (r + , h + ), g θ shares parameter with the discriminator, as the inputs to f θ are the embedding vectors.", "During generator learning, only θ is updated and the TransD model embedding parameters are frozen.", "Experiments We evaluate ACE with experiments on word embeddings, order embeddings, and knowledge graph embeddings tasks.", "In short, whenever the original learning objective is contrastive (all tasks except Glove fine-tuning) our results consistently show that ACE improves over NCE.", "In some cases, we include additional comparisons to the state-of-art results on the task to put the significance of such improvements in context: the generic ACE can often make a reasonable baseline competitive with SOTA methods that are optimized for the task.", "For word embeddings, we evaluate models trained from scratch as well as fine-tuned Glove models (Pennington et al., 2014) on word similarity tasks that consist of computing the similarity between word pairs where the ground truth is an average of human scores.", "We choose the Rare word dataset (Luong et al., 2013) and WordSim-353 (Finkelstein et al., 2001) by virtue of our hypothesis that ACE learns better representations for both rare and frequent words.", "We also qualitatively evaluate ACE word embeddings by inspecting the nearest neighbors of selected words.", "For the hypernym prediction task, following Vendrov et al.", "(2016) , hypernym pairs are created from the WordNet hierarchy's transitive closure.", "We use the released random development split and test split from Vendrov et al.", "(2016) , which both contain 4000 edges.", "For knowledge graph embeddings, we use TransD (Ji et al., 2015) as our base model, and perform ablation study to analyze the behavior of ACE with various add-on features, and confirm that entropy regularization is crucial for good performance in ACE.", "We also obtain link prediction results that are competitive or superior to the stateof-arts on the WN18 dataset (Bordes et al., 2014) .", "Training Word Embeddings from scratch In this experiment, we empirically observe that training word embeddings using ACE converges significantly faster than NCE after one epoch.", "As shown in Fig.", "3 both ACE (a mixture of p nce and g θ ) and just g θ (denoted by ADV) significantly outperforms the NCE baseline, with an absolute improvement of 73.1% and 58.5% respectively on RW score.", "We note similar results on WordSim-353 dataset where ACE and ADV outperforms NCE by 40.4% and 45.7%.", "We also evaluate our model qualitatively by inspecting the nearest neighbors of selected words in Table.", "1.", "We first present the five nearest neighbors to each word to show that both NCE and ACE models learn sensible embeddings.", "We then show that ACE embeddings have much better semantic relevance in a larger neighborhood (nearest neighbor 45-50).", "Finetuning Word Embeddings We take off-the-shelf pre-trained Glove embeddings which were trained using 6 billion tokens (Pennington et al., 2014) and fine-tune them using our algorithm.", "It is interesting to note that the original Glove objective does not fit into the contrastive learning framework, but nonetheless we find that they benefit from ACE.", "In fact, we observe that training such that 75% of the words appear as positive contexts is sufficient to beat the largest dimensionality pre-trained Glove 
model on word similarity tasks.", "We evaluate our performance on the Rare Word and WordSim353 data.", "As can be seen from our results in Table 2 , ACE on RW is not always better and for the 100d and 300d Glove embeddings is marginally worse.", "However, on WordSim353 ACE does considerably better across the board to the point where 50d Glove embeddings outperform the 300d baseline Glove model.", "Hypernym Prediction As shown in Table 3 , with ACE training, our method achieves a 1.5% improvement on accu- racy over Vendrov et al.", "(2016) without tunning any of the discriminator's hyperparameters.", "We further report training curve in Fig.", "1 , we report loss curve on randomly sampled pairs.", "We stress that in the ACE model, we train random pairs and generator generated pairs jointly, as shown in Fig.", "2 , hard negatives help the order embedding model converges faster.", "Ablation Study and Improving TransD To analyze different aspects of ACE, we perform an ablation study on the knowledge graph embedding task.", "As described in Sec.", "4.3, the base Method Accuracy (%) order-embeddings 90.6 order-embeddings + Our ACE 92.0 Table 3 : Order Embedding Performance model (discriminator) we apply ACE to is TransD (Ji et al., 2015) .", "Fig.", "5 shows validation performance as training progresses.", "All variants of ACE converges to better results than base NCE.", "Among ACE variants, all methods that include entropy regularization significantly outperform without entropy regularization.", "Without the self critical baseline variance reduction, learning could progress faster at the beginning but the final performance suffers slightly.", "The best performance is obtained without the additional off-policy learning of the generator.", "Table.", "4 shows the final test results on WN18 link prediction task.", "It is interesting to note that ACE improves MRR score more significantly than hit@10.", "As MRR is a lot more sensitive to the top rankings, i.e., how the correct configuration ranks among the competitive alternatives, this is consistent with the fact that ACE samples hard negatives and forces the base model to learn a more discriminative representation of the positive examples.", "(Trouillon et al., 2016) , which achieves the SOTA on this dataset.", "Among all TransD based models (the best results in this group is underlined), ACE improves over basic NCE and another GAN based approach KBGAN.", "The gap on MRR is likely due to the difference between TransD and COMPLEX models.", "Hard Negative Analysis To better understand the effect of the adversarial samples proposed by the generator we plot the discriminator loss on both p nce and g θ samples.", "In this context, a harder sample means a higher loss assigned by the discriminator.", "Fig.", "4 shows that discriminator loss for the word embedding task on g θ samples are always higher than on p nce samples, confirming that the generator is indeed sampling harder negatives.", "For Hypernym Prediction task, Fig.2 shows discriminator loss on negative pairs sampled from NCE and ACE respectively.", "The higher the loss the harder the negative pair is.", "As indicated in the left plot, loss on the ACE negative terms collapses faster than on the NCE negatives.", "After adding entropy regularization and weight decay, the generator works as expected.", "Limitations When the generator softmax is large, the current implementation of ACE training is computationally expensive.", "Although ACE converges faster per iteration, it may converge more slowly 
on wall-clock time depending on the cost of the softmax.", "However, embeddings are typically used as pre-trained building blocks for subsequent tasks.", "Thus, their learning is usually the pre-computation step for the more complex downstream models and spending more time is justified, especially with GPU acceleration.", "We believe that the computational cost could potentially be reduced via some existing techniques such as the \"augment and reduce\" variational inference of (Ruiz et al., 2018), adaptive softmax (Grave et al., 2016) , or the \"sparsely-gated\" softmax of Shazeer et al.", "(2017) , but leave that to future work.", "Another limitation is on the theoretical front.", "As noted in Goodfellow (2014) , GAN learning does not implement maximum likelihood estimation (MLE), while NCE has MLE as an asymptotic limit.", "To the best of our knowledge, more distant connections between GAN and MLE training are not known, and tools for analyzing the equilibrium of a min-max game where players are parametrized by deep neural nets are currently not available to the best of our knowledge.", "Conclusion In this paper, we propose Adversarial Contrastive Estimation as a general technique for improving supervised learning problems that learn by contrasting observed and fictitious samples.", "Specifically, we use a generator network in a conditional GAN like setting to propose hard negative examples for our discriminator model.", "We find that a mixture distribution of randomly sampling negative examples along with an adaptive negative sampler leads to improved performances on a variety of embedding tasks.", "We validate our hypothesis that hard negative examples are critical to optimal learning and can be proposed via our ACE framework.", "Finally, we find that controlling the entropy of the generator through a regularization term and properly handling false negatives is crucial for successful training." ] }
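Pulling together the pieces described in the paper content above (the mixture negative sampler, the REINFORCE update with the discriminator loss as reward, the self-critical baseline, and the entropy regularizer), one ACE update can be sketched as follows. This is a minimal PyTorch-style sketch, not the authors' released code: `score`, `gen_logits`, both optimizers, and all shapes are assumed placeholders, and the hinge form of the entropy term is one concrete reading of the regularizer that keeps H(g_theta(.|x)) above log(k).

```python
import torch
from torch.distributions import Categorical

def ace_step(score, gen_logits, nce_probs, x, y_pos, disc_opt, gen_opt,
             lam=0.5, margin=1.0, k=1000):
    # score(x, y): distance-like compatibility, lower = better pair (assumed callable)
    # gen_logits(x): logits of g_theta(y | x) over all candidate y (assumed callable)
    # nce_probs: fixed unconditional noise distribution p_nce(y), a 1-D tensor of probabilities
    B = x.size(0)

    # Mixture negative sampler: fixed NCE noise plus the adaptive conditional generator.
    y_nce = torch.multinomial(nce_probs, B, replacement=True)
    y_adv = Categorical(logits=gen_logits(x)).sample()

    def triplet(y_neg):  # l_w(x, y+, y-) = max(0, margin + s(x, y+) - s(x, y-))
        return torch.relu(margin + score(x, y_pos) - score(x, y_neg))

    # Discriminator (embedding model) step: mix the loss over both negative sources.
    d_loss = (lam * triplet(y_nce) + (1.0 - lam) * triplet(y_adv)).mean()
    disc_opt.zero_grad()
    d_loss.backward()
    disc_opt.step()

    # Generator step: REINFORCE, with the discriminator loss as reward and a
    # self-critical baseline (the reward of the generator's most likely sample).
    with torch.no_grad():
        reward = triplet(y_adv)
        baseline = triplet(gen_logits(x).argmax(dim=-1))
    gen = Categorical(logits=gen_logits(x))            # fresh graph for grads w.r.t. theta
    pg_loss = -((reward - baseline) * gen.log_prob(y_adv)).mean()

    # Entropy term: penalize the generator when H(g_theta(.|x)) falls below log(k),
    # i.e. encourage it to spread its mass over at least k candidates.
    ent_pen = torch.relu(torch.log(torch.tensor(float(k))) - gen.entropy()).mean()

    gen_opt.zero_grad()
    ((1.0 - lam) * pg_loss + ent_pen).backward()
    gen_opt.step()
```

Because the reward is detached, only the policy-gradient and entropy terms update the generator, while the embedding model is updated purely through its own loss on the mixed negatives.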
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "3", "4", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "5.5", "6", "7" ], "paper_header_content": [ "Introduction", "Background: contrastive learning", "Adversarial mixture noise", "Learning the generator", "Entropy and training stability", "Handling false negatives", "Variance Reduction", "Related Work", "Application of ACE on three tasks 4.1 Word Embeddings", "Order Embeddings Hypernym Prediction", "Knowledge Graph Embeddings", "Experiments", "Training Word Embeddings from scratch", "Finetuning Word Embeddings", "Hypernym Prediction", "Ablation Study and Improving TransD", "Hard Negative Analysis", "Limitations", "Conclusion" ] }
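For the word-embedding task, the discriminator in the value function V(D, G) quoted above is the usual skip-gram scorer trained against K fixed-noise negatives plus one adversarial negative per positive context. A compact sketch of that discriminator side follows; the vocabulary size and dimensionality are illustrative assumptions, not the released model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipGramDiscriminator(nn.Module):
    """D(w_t, w_c) = sigmoid(v_t . u_c): two embedding tables and a dot-product scorer."""
    def __init__(self, vocab_size=150_000, dim=300):
        super().__init__()
        self.target = nn.Embedding(vocab_size, dim)    # v_t -- the word embeddings kept at the end
        self.context = nn.Embedding(vocab_size, dim)   # u_c -- shared with the generator's first layer

    def logits(self, w_t, w_c):
        v = self.target(w_t)                           # (B, dim)
        u = self.context(w_c)                          # (B, dim) or (B, K, dim)
        if u.dim() == 3:
            v = v.unsqueeze(1)
        return (v * u).sum(-1)

    def loss(self, w_t, w_pos, w_nce, w_adv):
        # w_pos: (B,) true contexts; w_nce: (B, K) fixed-noise negatives; w_adv: (B,) generator negatives
        pos = F.logsigmoid(self.logits(w_t, w_pos))
        neg_nce = F.logsigmoid(-self.logits(w_t, w_nce)).sum(-1)
        neg_adv = F.logsigmoid(-self.logits(w_t, w_adv))
        return -(pos + neg_nce + neg_adv).mean()
```

Sampling `w_adv` from g_theta(w_c | w_t) instead of from the unigram table is the only change relative to plain NCE, so the same loss covers both training modes.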
GEM-SciDuet-train-33#paper-1047#slide-1
Easy Negative Examples with NCE
Noise Contrastive Estimation samples negatives by taking p(y−|x+) to be some unconditional pnce(y). What's wrong with this? The negative y− in (x+, y−) is not tailored toward x+. Hard negatives become difficult to draw as training progresses. The model doesn't learn discriminating features between positive and hard negative examples. NCE negatives are easy!
Noise Contrastive Estimation samples negatives by taking p(y−|x+) to be some unconditional pnce(y). What's wrong with this? The negative y− in (x+, y−) is not tailored toward x+. Hard negatives become difficult to draw as training progresses. The model doesn't learn discriminating features between positive and hard negative examples. NCE negatives are easy!
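The "not tailored toward x+" complaint above is easy to see in code: the fixed NCE sampler draws every negative from one unconditional table, so the proposed contexts never depend on the current target word. A toy sketch follows; the 3/4-power smoothing is the common word2vec-style heuristic and is an assumption here, not something stated in this record.

```python
import torch

def build_nce_sampler(unigram_counts, power=0.75):
    """Fixed, unconditional noise distribution p_nce(y) over the vocabulary."""
    probs = unigram_counts.float().pow(power)
    probs = probs / probs.sum()

    def sample(batch_size, num_negatives):
        # The target words are never consulted: every positive pair receives negatives
        # drawn from the same table, which is why these negatives are rarely hard ones.
        idx = torch.multinomial(probs, batch_size * num_negatives, replacement=True)
        return idx.view(batch_size, num_negatives)

    return sample
```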
[]
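For the hypernym-prediction task described in the paper content above, the order-embedding score s_w(x, y) = ||max(0, y − x)||^2 and its margin-based negative counterpart translate almost directly into code. A small sketch, with the margin value and the (non-negative) entity embedding vectors treated as placeholders:

```python
import torch

def order_violation(x, y):
    # s_w(x, y) = || max(0, y - x) ||^2 : zero exactly when y <= x coordinate-wise
    return torch.clamp(y - x, min=0.0).pow(2).sum(-1)

def order_embedding_loss(pos_u, pos_v, neg_u, neg_v, margin=1.0):
    # Separable loss: order violation on positive pairs plus the hinge
    # s~_w(u, v) = max(0, margin - s_w(u, v)) on negative pairs.
    positive_term = order_violation(pos_u, pos_v)
    negative_term = torch.relu(margin - order_violation(neg_u, neg_v))
    return (positive_term + negative_term).mean()
```

Under ACE, the negative pairs fed to this loss come from the generator's categorical distribution over the entity set rather than from uniform corruption.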
GEM-SciDuet-train-33#paper-1047#slide-2
1047
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
GEM-SciDuet-train-33#paper-1047#slide-2
Hard Negative Examples
Hard negatives result in higher losses and thus more informative gradients. They are not necessarily the points closest to a positive datapoint in embedding space.
Hard negatives result in higher losses and thus more informative gradients. They are not necessarily the points closest to a positive datapoint in embedding space.
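The point made in this slide (hard negatives carry larger losses and therefore more informative gradients) can be made concrete with a brute-force contrast over a candidate pool. This is only an illustration of the intuition: the paper's mechanism is the learned sampler g_theta, not exhaustive mining, and `score` plus the candidate set are placeholders.

```python
import torch

def hardest_negatives(score, x, y_pos, candidates, margin=1.0):
    """Rank candidate negatives by the triplet loss they induce; return the hardest per example.

    `score(x, y)` is a distance-like scorer assumed to accept equally shaped index tensors
    elementwise; `candidates` has shape (B, C). Illustrative only, not the paper's sampler.
    """
    pos = score(x, y_pos).unsqueeze(1)                                # (B, 1)
    neg = score(x.unsqueeze(1).expand_as(candidates), candidates)     # (B, C)
    losses = torch.relu(margin + pos - neg)                           # larger loss = harder negative
    hardness, hard_idx = losses.max(dim=1)
    return candidates.gather(1, hard_idx.unsqueeze(1)).squeeze(1), hardness
```

Ranking a pool like this costs C scorer calls per example; reading g_theta as an approximate, amortized version of this search is the view taken in the related-work discussion quoted above.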
[]
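A toy sketch of the point this slide makes: "hardness" is the loss the current discriminator assigns to a candidate negative, not its distance to the positive example in embedding space. The loss values and word strings below are purely illustrative:

    import numpy as np

    def hardest_negatives(disc_loss_fn, x_pos, candidates, top_k=1):
        """Rank candidate negatives by the current discriminator's loss; the highest loss is the hardest."""
        losses = np.array([disc_loss_fn(x_pos, y) for y in candidates])
        order = np.argsort(-losses)
        return [candidates[i] for i in order[:top_k]], losses[order[:top_k]]

    # stand-in discriminator loss: the model happens to find "cat" hardest for "dog",
    # even though "puppy" may be closer to "dog" in embedding space
    toy_loss = lambda x, y: {"puppy": 0.2, "cat": 1.3, "carburetor": 0.05}[y]
    print(hardest_negatives(toy_loss, "dog", ["puppy", "cat", "carburetor"]))   # (['cat'], array([1.3]))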
GEM-SciDuet-train-33#paper-1047#slide-3
1047
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226 ], "paper_content_text": [ "Introduction Many models learn by contrasting losses on observed positive examples with those on some fictitious negative examples, trying to decrease some score on positive ones while increasing it on negative ones.", "There are multiple reasons why such contrastive learning approach is needed.", "Computational tractability is one.", "For instance, instead of using softmax to predict a word for learning word embeddings, noise contrastive estimation (NCE) (Dyer, 2014; Mnih and Teh, 2012) can be used in skip-gram or CBOW word embedding models (Gutmann and Hyvärinen, 2012; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) .", "Another reason is * authors contributed equally † Work done while author was an intern at Borealis AI modeling need, as certain assumptions are best expressed as some score or energy in margin based or un-normalized probability models (Smith and Eisner, 2005) .", "For example, modeling entity relations as translations or variants thereof in a vector space naturally leads to a distance-based score to be minimized for observed entity-relation-entity triplets (Bordes et al., 2013) .", "Given a scoring function, the gradient of the model's parameters on observed positive examples can be readily computed, but the negative phase requires a design decision on how to sample data.", "In noise contrastive estimation for word embeddings, a negative example is formed by replacing a component of a positive pair by randomly selecting a sampled word from the vocabulary, resulting in a fictitious word-context pair which would be unlikely to actually exist in the dataset.", "This negative sampling by corruption approach is also used in learning knowledge graph embeddings (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Trouillon et al., 2016; Yang et al., 2014; Dettmers et al., 2017) , order embeddings (Vendrov et al., 2016) , caption generation (Dai and Lin, 2017) , etc.", "Typically the corruption distribution is the same for all inputs like in skip-gram or CBOW NCE, rather than being a conditional distribution that takes into account information about the input sample under consideration.", "Furthermore, the corruption process usually only encodes a human prior as to what constitutes a hard negative sample, rather than being learned from data.", "For these two reasons, the simple fixed corruption process often yields only easy negative examples.", "Easy negatives are sub-optimal 
for learning discriminative representation as they do not force the model to find critical characteristics of observed positive data, which has been independently discovered in applications outside NLP previously (Shrivastava et al., 2016) .", "Even if hard negatives are occasionally reached, the infrequency means slow convergence.", "Designing a more sophisticated corruption process could be fruitful, but requires costly trialand-error by a human expert.", "In this work, we propose to augment the simple corruption noise process in various embedding models with an adversarially learned conditional distribution, forming a mixture negative sampler that adapts to the underlying data and the embedding model training progress.", "The resulting method is referred to as adversarial contrastive estimation (ACE).", "The adaptive conditional model engages in a minimax game with the primary embedding model, much like in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a) , where a discriminator net (D), tries to distinguish samples produced by a generator (G) from real data (Goodfellow et al., 2014b) .", "In ACE, the main model learns to distinguish between a real positive example and a negative sample selected by the mixture of a fixed NCE sampler and an adversarial generator.", "The main model and the generator takes alternating turns to update their parameters.", "In fact, our method can be viewed as a conditional GAN (Mirza and Osindero, 2014) on discrete inputs, with a mixture generator consisting of a learned and a fixed distribution, with additional techniques introduced to achieve stable and convergent training of embedding models.", "In our proposed ACE approach, the conditional sampler finds harder negatives than NCE, while being able to gracefully fall back to NCE whenever the generator cannot find hard negatives.", "We demonstrate the efficacy and generality of the proposed method on three different learning tasks, word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) and knowledge graph embeddings (Ji et al., 2015) .", "Method Background: contrastive learning In the most general form, our method applies to supervised learning problems with a contrastive objective of the following form: L(ω) = E p(x + ,y + ,y − ) l ω (x + , y + , y − ) (1) where l ω (x + , y + , y − ) captures both the model with parameters ω and the loss that scores a positive tuple (x + , y + ) against a negative one (x + , y − ).", "E p(x + ,y + ,y − ) (.)", "denotes expectation with respect to some joint distribution over positive and negative samples.", "Furthermore, by the law of total expectation, and the fact that given x + , the negative sampling is not dependent on the positive label, i.e.", "p(y + , y − |x + ) = p(y + |x + )p(y − |x + ), Eq.", "1 can be re-written as E p(x + ) [E p(y + |x + )p(y − |x + ) l ω (x + , y + , y − )] (2) Separable loss In the case where the loss decomposes into a sum of scores on positive and negative tuples such as l ω (x + , y + , y − ) = s ω (x + , y + )−s ω (x + , y − ), then Expression.", "2 becomes E p + (x) [E p + (y|x) s ω (x, y) − E p − (y|x)sω (x, y)] (3) where we moved the + and − to p for notational brevity.", "Learning by stochastic gradient descent aims to adjust ω to pushing down s ω (x, y) on samples from p + while pushing ups ω (x, y) on samples from p − .", "Note that for generality, the scoring function for negative samples, denoted bỹ s ω , could be slightly different from s ω .", "For instance,s could contain a margin as 
in the case of Order Embeddings in Sec.", "4.2.", "Non separable loss Eq.", "1 is the general form that we would like to consider because for certain problems, the loss function cannot be separated into sums of terms containing only positive (x + , y + ) and terms with negatives (x + , y − ).", "An example of such a nonseparable loss is the triplet ranking loss (Schroff et al., 2015) : l ω = max(0, η + s ω (x + , y + ) − s ω (x + , y − )), which does not decompose due to the rectification.", "Noise contrastive estimation The typical NCE approach in tasks such as word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) , and knowledge graph embeddings can be viewed as a special case of Eq.", "2 by taking p(y − |x + ) to be some unconditional p nce (y).", "This leads to efficient computation during training, however, p nce (y) sacrifices the sampling efficiency of learning as the negatives produced using a fixed distribution are not tailored toward x + , and as a result are not necessarily hard negative examples.", "Thus, the model is not forced to discover discriminative representation of observed positive data.", "As training progresses, more and more negative examples are correctly learned, the probability of drawing a hard negative example diminishes further, causing slow convergence.", "Adversarial mixture noise To remedy the above mentioned problem of a fixed unconditional negative sampler, we propose to augment it into a mixture one, λp nce (y) + (1 − λ)g θ (y|x), where g θ is a conditional distribution with a learnable parameter θ and λ is a hyperparameter.", "The objective in Expression.", "2 can then be written as (conditioned on x for notational brevity): L(ω, θ; x) = λ E p(y + |x)pnce(y − ) l ω (x, y + , y − ) + (1 − λ) E p(y + |x)g θ (y − |x) l ω (x, y + , y − ) (4) We learn (ω, θ) in a GAN-style minimax game: min ω max θ V (ω, θ) = min ω max θ E p + (x) L(ω, θ; x) (5 ) The embedding model behind l ω (x, y + , y − ) is similar to the discriminator in (conditional) GAN (or critic in Wasserstein or Energy-based GAN (Zhao et al., 2016) , while g θ (y|x) acts as the generator.", "Henceforth, we will use the term discriminator (D) and embedding model interchangeably, and refer to g θ as the generator.", "Learning the generator There is one important distinction to typical GAN: g θ (y|x) defines a categorical distribution over possible y values, and samples are drawn accordingly; in contrast to typical GAN over continuous data space such as images, where samples are generated by an implicit generative model that warps noise vectors into data points.", "Due to the discrete sampling step, g θ cannot learn by receiving gradient through the discriminator.", "One possible solution is to use the Gumbel-softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016) , which gives a differentiable approximation.", "However, this differentiability comes at the cost of drawing N Gumbel samples per each categorical sample, where N is the number of categories.", "For word embeddings, N is the vocabulary size, and for knowledge graph embeddings, N is the number of entities, both leading to infeasible computational requirements.", "Instead, we use the REINFORCE (Williams, 1992) gradient estimator for ∇ θ L(θ, x): (1−λ) E −l ω (x, y + , y − )∇ θ log(g θ (y − |x)) (6) where the expectation E is with respect to p(y + , y − |x) = p(y + |x)g θ (y − |x), and the discriminator loss l ω (x, y + , y − ) acts as the reward.", "With a separable loss, the (conditional) value 
function of the minimax game is: L(ω, θ; x) = E p + (y|x) s ω (x, y) − E pnce(y)sω (x, y) − E g θ (y|x)sω (x, y) (7) and only the last term depends on the generator parameter ω.", "Hence, with a separable loss, the reward is −s(x + , y − ).", "This reduction does not happen with a non-separable loss, and we have to use l ω (x, y + , y − ).", "Entropy and training stability GAN training can suffer from instability and degeneracy where the generator probability mass collapses to a few modes or points.", "Much work has been done to stabilize GAN training in the continuous case Gulrajani et al., 2017; Cao et al., 2018) .", "In ACE, if the generator g θ probability mass collapses to a few candidates, then after the discriminator successfully learns about these negatives, g θ cannot adapt to select new hard negatives, because the REIN-FORCE gradient estimator Eq.", "6 relies on g θ being able to explore other candidates during sampling.", "Therefore, if the g θ probability mass collapses, instead of leading to oscillation as in typical GAN, the min-max game in ACE reaches an equilibrium where the discriminator wins and g θ can no longer adapt, then ACE falls back to NCE since the negative sampler has another mixture component from NCE.", "This behavior of gracefully falling back to NCE is more desirable than the alternative of stalled training if p − (y|x) does not have a simple p nce mixture component.", "However, we would still like to avoid such collapse, as the adversarial samples provide greater learning signals than NCE samples.", "To this end, we propose to use a regularizer to encourage the categorical distribution g θ (y|x) to have high entropy.", "In order to make the the regularizer interpretable and its hyperparameters easy to tune, we design the following form: R ent (x) = min(0, c − H(g θ (y|x))) (8) where H(g θ (y|x)) is the entropy of the categorical distribution g θ (y|x), and c = log(k) is the entropy of a uniform distribution over k choices, and k is a hyper-parameter.", "Intuitively, R ent expresses the prior that the generator should spread its mass over more than k choices for each x.", "Handling false negatives During negative sampling, p − (y|x) could actually produce y that forms a positive pair that exists in the training set, i.e., a false negative.", "This possibility exists in NCE already, but since p nce is not adaptive, the probability of sampling a false negative is low.", "Hence in NCE, the score on this false negative (true observation) pair is pushed up less in the negative term than in the positive term.", "However, with the adaptive sampler, g ω (y|x), false negatives become a much more severe issue.", "g ω (y|x) can learn to concentrate its mass on a few false negatives, significantly canceling the learning of those observations in the positive phase.", "The entropy regularization reduces this problem as it forces the generator to spread its mass, hence reducing the chance of a false negative.", "To further alleviate this problem, whenever computationally feasible, we apply an additional two-step technique.", "First, we maintain a hash map of the training data in memory, and use it to efficiently detect if a negative sample (x + , y − ) is an actual observation.", "If so, its contribution to the loss is given a zero weight in ω learning step.", "Second, to upate θ in the generator learning step, the reward for false negative samples are replaced by a large penalty, so that the REINFORCE gradient update would steer g θ away from those samples.", "The second 
step is needed to prevent null computation where g θ learns to sample false negatives which are subsequently ignored by the discriminator update for ω. Variance Reduction The basic REINFORCE gradient estimator is poised with high variance, so in practice one often needs to apply variance reduction techniques.", "The most basic form of variance reduction is to subtract a baseline from the reward.", "As long as the baseline is not a function of actions (i.e., samples y − being drawn), the REINFORCE gradient estimator remains unbiased.", "More advanced gradient estimators exist that also reduce variance (Grathwohl et al., 2017; Tucker et al., 2017; Liu et al., 2018) , but for simplicity we use the self-critical baseline method (Rennie et al., 2016) , where the baseline is b(x) = l ω (y + , y , x), or b(x) = −s ω (y , x) in the separable loss case, and y = argmax i g θ (y i |x).", "In other words, the baseline is the reward of the most likely sample according to the generator.", "2.7 Improving exploration in g θ by leveraging NCE samples In Sec.", "2.4 we touched on the need for sufficient exploration in g θ .", "It is possible to also leverage negative samples from NCE to help the generator learn.", "This is essentially off-policy exploration in reinforcement learning since NCE samples are not drawn according to g θ (y|x).", "The generator learning can use importance re-weighting to leverage those samples.", "The resulting REIN-FORCE gradient estimator is basically the same as Eq.", "6 except that the rewards are reweighted by g θ (y − |x)/p nce (y − ), and the expectation is with respect to p(y + |x)p nce (y − ).", "This additional offpolicy learning term provides gradient information for generator learning if g θ (y − |x) is not zero, meaning that for it to be effective in helping exploration, the generator cannot be collapsed at the first place.", "Hence, in practice, this term is only used to further help on top of the entropy regularization, but it does not replace it.", "Related Work Smith and Eisner (2005) proposed contrastive estimation as a way for unsupervised learning of log-linear models by taking implicit evidence from user-defined neighborhoods around observed datapoints.", "Gutmann and Hyvärinen (2010) introduced NCE as an alternative to the hierarchical softmax.", "In the works of Mnih and Teh (2012) and Mnih and Kavukcuoglu (2013) , NCE is applied to log-bilinear models and Vaswani et al.", "(2013) applied NCE to neural probabilistic language models (Yoshua et al., 2003) .", "Compared to these previous NCE methods that rely on simple fixed sampling heuristics, ACE uses an adaptive sampler that produces harder negatives.", "In the domain of max-margin estimation for structured prediction (Taskar et al., 2005) , loss augmented MAP inference plays the role of finding hard negatives (the hardest).", "However, this inference is only tractable in a limited class of models such structured SVM (Tsochantaridis et al., 2005) .", "Compared to those models that use exact maximization to find the hardest negative configuration each time, the generator in ACE can be viewed as learning an approximate amortized inference network.", "Concurrently to this work, Tu and Gimpel (2018) proposes a very similar framework, using a learned inference network for Structured prediction energy networks (SPEN) (Belanger and McCallum, 2016) .", "Concurrent with our work, there have been other interests in applying the GAN to NLP problems (Fedus et al., 2018; Wang et al., 2018; Cai and Wang, 2017) .", 
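The generator update described in the surrounding text (REINFORCE with the discriminator loss as reward, a self-critical baseline, and the entropy floor c = log(k)) can be sketched as a single PyTorch step. This is a simplified reading of the paper rather than its released code, and the argument names sampled_reward, greedy_reward, and ent_floor are assumptions:

    import torch

    def generator_step_loss(logits, sampled_idx, sampled_reward, greedy_reward, ent_floor):
        """
        logits:         [batch, num_candidates] unnormalized generator scores g_theta(y | x)
        sampled_idx:    [batch] indices of the sampled negatives y-
        sampled_reward: [batch] discriminator loss on (x, y+, y-), used as the REINFORCE reward
        greedy_reward:  [batch] the same reward for the generator's argmax candidate (self-critical baseline)
        ent_floor:      log(k), the entropy floor used by the regularizer R_ent
        """
        log_probs = torch.log_softmax(logits, dim=-1)
        sampled_logp = log_probs.gather(1, sampled_idx.unsqueeze(1)).squeeze(1)

        # REINFORCE with baseline: maximize expected reward, i.e. minimize its negation
        advantage = (sampled_reward - greedy_reward).detach()
        reinforce_loss = -(advantage * sampled_logp).mean()

        # one reading of R_ent: penalize the generator when H(g_theta(.|x)) falls below log(k)
        probs = log_probs.exp()
        entropy = -(probs * log_probs).sum(dim=-1)
        entropy_penalty = torch.clamp(ent_floor - entropy, min=0.0).mean()

        return reinforce_loss + entropy_penalty

    # toy usage: batch of 2, 4 candidate negatives, entropy floor log(3)
    logits = torch.randn(2, 4, requires_grad=True)
    loss = generator_step_loss(logits,
                               sampled_idx=torch.tensor([1, 3]),
                               sampled_reward=torch.tensor([0.9, 0.2]),
                               greedy_reward=torch.tensor([0.5, 0.5]),
                               ent_floor=torch.log(torch.tensor(3.0)))
    loss.backward()

In practice the baseline reward would come from scoring the generator's argmax candidate with the current discriminator, and false-negative samples would have their reward replaced by a large penalty as described in the text above.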
"Knowledge graph models naturally lend to a GAN setup, and has been the subject of study in Wang et al.", "(2018) and Cai and Wang (2017) .", "These two concurrent works are most closely related to one of the three tasks on which we study ACE in this work.", "Besides a more general formulation that applies to problems beyond those considered in Wang et al.", "(2018) and Cai and Wang (2017) , the techniques introduced in our work on handling false negatives and entropy regularization lead to improved experimental results as shown in Sec.", "5.4.", "Application of ACE on three tasks 4.1 Word Embeddings Word embeddings learn a vector representation of words from co-occurrences in a text corpus.", "NCE casts this learning problem as a binary classification where the model tries to distinguish positive word and context pairs, from negative noise samples composed of word and false context pairs.", "The NCE objective in Skip-gram (Mikolov et al., 2013) for word embeddings is a separable loss of the form: L = − wt∈V [log p(y = 1|w t , w + c ) + K c=1 log p(y = 0|w t , w − c )] (9) Here, w + c is sampled from the set of true contexts and w − c ∼ Q is sampled k times from a fixed noise distribution.", "Mikolov et al.", "(2013) introduced a further simplification of NCE, called \"Negative Sampling\" (Dyer, 2014) .", "With respect to our ACE framework, the difference between NCE and Negative Sampling is inconsequential, so we continue the discussion using NCE.", "A drawback of this sampling scheme is that it favors more common words as context.", "Another issue is that the negative context words are sampled in the same way, rather than tailored toward the actual target word.", "To apply ACE to this problem we first define the value function for the minimax game, V (D, G), as follows: V (D, G) = E p + (wc) [log D(w c , w t )] − E pnce(wc) [− log(1 − D(w c , w t ))] − E g θ (wc|wt) [− log(1 − D(w c , w t ))] (10) with D = p(y = 1|w t , w c ) and G = g θ (w c |w t ).", "Implementation details For our experiments, we train all our models on a single pass of the May 2017 dump of the English Wikipedia with lowercased unigrams.", "The vocabulary size is restricted to the top 150k most frequent words when training from scratch while for finetuning we use the same vocabulary as Pennington et al.", "(2014) , which is 400k of the most frequent words.", "We use 5 NCE samples for each positive sample and 1 adversarial sample in a window size of 10 and the same positive subsampling scheme proposed by Mikolov et al.", "(2013) .", "Learning for both G and D uses Adam (Kingma and Ba, 2014) optimizer with its default parameters.", "Our conditional discriminator is modeled using the Skip-Gram architecture, which is a two layer neural network with a linear mapping between the layers.", "The generator network consists of an embedding layer followed by two small hidden layers, followed by an output softmax layer.", "The first layer of the generator shares its weights with the second embedding layer in the discriminator network, which we find really speeds up convergence as the generator does not have to relearn its own set of embeddings.", "The difference between the discriminator and generator is that a sigmoid nonlinearity is used after the second layer in the discriminator, while in the generator, a softmax layer is used to define a categorical distribution over negative word candidates.", "We find that controlling the generator entropy is critical for finetuning experiments as otherwise the generator collapses to its 
favorite negative sample.", "The word embeddings are taken to be the first dense matrix in the discriminator.", "Order Embeddings Hypernym Prediction As introduced in Vendrov et al.", "(2016) , ordered representations over hierarchy can be learned by order embeddings.", "An example task for such ordered representation is hypernym prediction.", "A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second.", "For completeness, we briefly describe order embeddings, then analyze ACE on the hypernym prediction task.", "In order embeddings, each entity is represented by a vector in R N , the score for a positive ordered pair of entities (x, y) is defined by s ω (x, y) = ||max(0, y − x)|| 2 and, score for a negative ordered pair (x + , y − ) is defined bỹ s ω (x + , y − ) = max{0, η − s(x + , y − )}, where is η is the margin.", "Let f (u) be the embedding function which takes an entity as input and outputs en embedding vector.", "We define P as a set of positive pairs and N as negative pairs, the separable loss function for order embedding task is defined by: L = (u,v)∈P s ω (f (u), f (v)))+ (u,v)∈Ns (f (u), f (v)) (11) Implementation details Our generator for this task is just a linear fully connected softmax layer, taking an embedding vector from discriminator as input and outputting a categorical distribution over the entity set.", "For the discriminator, we inherit all model setting from Vendrov et al.", "(2016) : we use 50 dimensions hidden state and bash size 1000, a learning rate of 0.01 and the Adam optimizer.", "For the generator, we use a batch size of 1000, a learning rate 0.01 and the Adam optimizer.", "We apply weight decay with rate 0.1 and entropy loss regularization as described in Sec.", "2.4.", "We handle false negative as described in Sec.", "2.5.", "After cross validation, variance reduction and leveraging NCE samples does not greatly affect the order embedding task.", "Knowledge Graph Embeddings Knowledge graphs contain entity and relation data of the form (head entity, relation, tail entity), and the goal is to learn from observed positive entity relations and predict missing links (a.k.a.", "link prediction).", "There have been many works on knowledge graph embeddings, e.g.", "TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , TransH (Wang et al., 2014) , TransD (Ji et al., 2015) , Complex (Trouillon et al., 2016) , DistMult (Yang et al., 2014) and ConvE (Dettmers et al., 2017) .", "Many of them use a contrastive learning objective.", "Here we take TransD as an example, and modify its noise contrastive learning to ACE, and demonstrate significant improvement in sample efficiency and link prediction results.", "Implementation details Let a positive entity-relation-entity triplet be denoted by ξ + = (h + , r + , t + ), and a negative triplet could either have its head or tail be a negative sample, i.e.", "ξ − = (h − , r + , t + ) or ξ − = (h + , r + , t − ).", "In either case, the general formulation in Sec.", "2.1 still applies.", "The non-separable loss function takes on the form: l = max(0, η + s ω (ξ + ) − s ω (ξ − )) (12) The scoring rule is: s = h ⊥ + r − t ⊥ (13) where r is the embedding vector for r, and h ⊥ is projection of the embedding of h onto the space of r by h ⊥ = h + r p h p h, where r p and h p are projection parameters of the model.", "t ⊥ is defined in a similar way through parameters t, t p and r p .", "The form of the generator g θ (t − |r + , h + ) is chosen to be f θ (h ⊥ , h ⊥ + r), where f θ 
is a feedforward neural net that concatenates its two input arguments, then propagates through two hidden layers, followed by a final softmax output layer.", "As a function of (r + , h + ), g θ shares parameter with the discriminator, as the inputs to f θ are the embedding vectors.", "During generator learning, only θ is updated and the TransD model embedding parameters are frozen.", "Experiments We evaluate ACE with experiments on word embeddings, order embeddings, and knowledge graph embeddings tasks.", "In short, whenever the original learning objective is contrastive (all tasks except Glove fine-tuning) our results consistently show that ACE improves over NCE.", "In some cases, we include additional comparisons to the state-of-art results on the task to put the significance of such improvements in context: the generic ACE can often make a reasonable baseline competitive with SOTA methods that are optimized for the task.", "For word embeddings, we evaluate models trained from scratch as well as fine-tuned Glove models (Pennington et al., 2014) on word similarity tasks that consist of computing the similarity between word pairs where the ground truth is an average of human scores.", "We choose the Rare word dataset (Luong et al., 2013) and WordSim-353 (Finkelstein et al., 2001) by virtue of our hypothesis that ACE learns better representations for both rare and frequent words.", "We also qualitatively evaluate ACE word embeddings by inspecting the nearest neighbors of selected words.", "For the hypernym prediction task, following Vendrov et al.", "(2016) , hypernym pairs are created from the WordNet hierarchy's transitive closure.", "We use the released random development split and test split from Vendrov et al.", "(2016) , which both contain 4000 edges.", "For knowledge graph embeddings, we use TransD (Ji et al., 2015) as our base model, and perform ablation study to analyze the behavior of ACE with various add-on features, and confirm that entropy regularization is crucial for good performance in ACE.", "We also obtain link prediction results that are competitive or superior to the stateof-arts on the WN18 dataset (Bordes et al., 2014) .", "Training Word Embeddings from scratch In this experiment, we empirically observe that training word embeddings using ACE converges significantly faster than NCE after one epoch.", "As shown in Fig.", "3 both ACE (a mixture of p nce and g θ ) and just g θ (denoted by ADV) significantly outperforms the NCE baseline, with an absolute improvement of 73.1% and 58.5% respectively on RW score.", "We note similar results on WordSim-353 dataset where ACE and ADV outperforms NCE by 40.4% and 45.7%.", "We also evaluate our model qualitatively by inspecting the nearest neighbors of selected words in Table.", "1.", "We first present the five nearest neighbors to each word to show that both NCE and ACE models learn sensible embeddings.", "We then show that ACE embeddings have much better semantic relevance in a larger neighborhood (nearest neighbor 45-50).", "Finetuning Word Embeddings We take off-the-shelf pre-trained Glove embeddings which were trained using 6 billion tokens (Pennington et al., 2014) and fine-tune them using our algorithm.", "It is interesting to note that the original Glove objective does not fit into the contrastive learning framework, but nonetheless we find that they benefit from ACE.", "In fact, we observe that training such that 75% of the words appear as positive contexts is sufficient to beat the largest dimensionality pre-trained Glove 
model on word similarity tasks.", "We evaluate our performance on the Rare Word and WordSim353 data.", "As can be seen from our results in Table 2 , ACE on RW is not always better and for the 100d and 300d Glove embeddings is marginally worse.", "However, on WordSim353 ACE does considerably better across the board to the point where 50d Glove embeddings outperform the 300d baseline Glove model.", "Hypernym Prediction As shown in Table 3 , with ACE training, our method achieves a 1.5% improvement on accu- racy over Vendrov et al.", "(2016) without tunning any of the discriminator's hyperparameters.", "We further report training curve in Fig.", "1 , we report loss curve on randomly sampled pairs.", "We stress that in the ACE model, we train random pairs and generator generated pairs jointly, as shown in Fig.", "2 , hard negatives help the order embedding model converges faster.", "Ablation Study and Improving TransD To analyze different aspects of ACE, we perform an ablation study on the knowledge graph embedding task.", "As described in Sec.", "4.3, the base Method Accuracy (%) order-embeddings 90.6 order-embeddings + Our ACE 92.0 Table 3 : Order Embedding Performance model (discriminator) we apply ACE to is TransD (Ji et al., 2015) .", "Fig.", "5 shows validation performance as training progresses.", "All variants of ACE converges to better results than base NCE.", "Among ACE variants, all methods that include entropy regularization significantly outperform without entropy regularization.", "Without the self critical baseline variance reduction, learning could progress faster at the beginning but the final performance suffers slightly.", "The best performance is obtained without the additional off-policy learning of the generator.", "Table.", "4 shows the final test results on WN18 link prediction task.", "It is interesting to note that ACE improves MRR score more significantly than hit@10.", "As MRR is a lot more sensitive to the top rankings, i.e., how the correct configuration ranks among the competitive alternatives, this is consistent with the fact that ACE samples hard negatives and forces the base model to learn a more discriminative representation of the positive examples.", "(Trouillon et al., 2016) , which achieves the SOTA on this dataset.", "Among all TransD based models (the best results in this group is underlined), ACE improves over basic NCE and another GAN based approach KBGAN.", "The gap on MRR is likely due to the difference between TransD and COMPLEX models.", "Hard Negative Analysis To better understand the effect of the adversarial samples proposed by the generator we plot the discriminator loss on both p nce and g θ samples.", "In this context, a harder sample means a higher loss assigned by the discriminator.", "Fig.", "4 shows that discriminator loss for the word embedding task on g θ samples are always higher than on p nce samples, confirming that the generator is indeed sampling harder negatives.", "For Hypernym Prediction task, Fig.2 shows discriminator loss on negative pairs sampled from NCE and ACE respectively.", "The higher the loss the harder the negative pair is.", "As indicated in the left plot, loss on the ACE negative terms collapses faster than on the NCE negatives.", "After adding entropy regularization and weight decay, the generator works as expected.", "Limitations When the generator softmax is large, the current implementation of ACE training is computationally expensive.", "Although ACE converges faster per iteration, it may converge more slowly 
on wall-clock time depending on the cost of the softmax.", "However, embeddings are typically used as pre-trained building blocks for subsequent tasks.", "Thus, their learning is usually the pre-computation step for the more complex downstream models and spending more time is justified, especially with GPU acceleration.", "We believe that the computational cost could potentially be reduced via some existing techniques such as the \"augment and reduce\" variational inference of (Ruiz et al., 2018), adaptive softmax (Grave et al., 2016) , or the \"sparsely-gated\" softmax of Shazeer et al.", "(2017) , but leave that to future work.", "Another limitation is on the theoretical front.", "As noted in Goodfellow (2014) , GAN learning does not implement maximum likelihood estimation (MLE), while NCE has MLE as an asymptotic limit.", "To the best of our knowledge, more distant connections between GAN and MLE training are not known, and tools for analyzing the equilibrium of a min-max game where players are parametrized by deep neural nets are currently not available to the best of our knowledge.", "Conclusion In this paper, we propose Adversarial Contrastive Estimation as a general technique for improving supervised learning problems that learn by contrasting observed and fictitious samples.", "Specifically, we use a generator network in a conditional GAN like setting to propose hard negative examples for our discriminator model.", "We find that a mixture distribution of randomly sampling negative examples along with an adaptive negative sampler leads to improved performances on a variety of embedding tasks.", "We validate our hypothesis that hard negative examples are critical to optimal learning and can be proposed via our ACE framework.", "Finally, we find that controlling the entropy of the generator through a regularization term and properly handling false negatives is crucial for successful training." ] }
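For the word-embedding task (Section 4.1 in the text above), the discriminator is a skip-gram model that separates true (word, context) pairs from both NCE-sampled and generator-sampled negatives. The sketch below writes this in the usual negative-sampling form rather than copying the extracted value-function notation, which is garbled above; all names are assumptions rather than the authors' code:

    import torch
    import torch.nn.functional as F

    def skipgram_ace_disc_loss(word_vecs, ctx_vecs, target, pos_ctx, nce_neg_ctx, adv_neg_ctx):
        """
        word_vecs, ctx_vecs: embedding matrices [vocab, dim]
        target:  [batch] center-word ids; pos_ctx: [batch] true context ids
        nce_neg_ctx: [batch, k_nce] NCE-sampled negative context ids
        adv_neg_ctx: [batch, k_adv] generator-sampled negative context ids
        """
        w = word_vecs[target]                                               # [batch, dim]
        pos_score = (w * ctx_vecs[pos_ctx]).sum(-1)                         # [batch]
        neg_ctx = torch.cat([nce_neg_ctx, adv_neg_ctx], dim=1)              # [batch, k_nce + k_adv]
        neg_score = torch.bmm(ctx_vecs[neg_ctx], w.unsqueeze(2)).squeeze(2) # [batch, k_nce + k_adv]

        # push positive pairs toward D = 1 and all sampled negatives toward D = 0
        return -(F.logsigmoid(pos_score) + F.logsigmoid(-neg_score).sum(-1)).mean()

    # toy usage mirroring the paper's setting of 5 NCE negatives and 1 adversarial negative
    V, D, B = 50, 8, 4
    word_vecs = torch.randn(V, D, requires_grad=True)
    ctx_vecs = torch.randn(V, D, requires_grad=True)
    loss = skipgram_ace_disc_loss(word_vecs, ctx_vecs,
                                  target=torch.randint(0, V, (B,)),
                                  pos_ctx=torch.randint(0, V, (B,)),
                                  nce_neg_ctx=torch.randint(0, V, (B, 5)),
                                  adv_neg_ctx=torch.randint(0, V, (B, 1)))
    loss.backward()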
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "3", "4", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "5.5", "6", "7" ], "paper_header_content": [ "Introduction", "Background: contrastive learning", "Adversarial mixture noise", "Learning the generator", "Entropy and training stability", "Handling false negatives", "Variance Reduction", "Related Work", "Application of ACE on three tasks 4.1 Word Embeddings", "Order Embeddings Hypernym Prediction", "Knowledge Graph Embeddings", "Experiments", "Training Word Embeddings from scratch", "Finetuning Word Embeddings", "Hypernym Prediction", "Ablation Study and Improving TransD", "Hard Negative Analysis", "Limitations", "Conclusion" ] }
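The base model for the knowledge-graph experiments above is TransD with a margin ranking loss. A simplified sketch, assuming entity and relation embeddings share one dimensionality (full TransD allows them to differ); all names below are illustrative:

    import torch

    def transd_project(e, e_p, r_p):
        # e_perp = (r_p e_p^T + I) e = e + (e_p . e) * r_p
        return e + (e_p * e).sum(dim=-1, keepdim=True) * r_p

    def transd_score(h, h_p, r, r_p, t, t_p):
        """Distance-style score ||h_perp + r - t_perp||; smaller means a more plausible triple."""
        return (transd_project(h, h_p, r_p) + r - transd_project(t, t_p, r_p)).norm(p=2, dim=-1)

    def margin_ranking_loss(score_pos, score_neg, margin=1.0):
        # non-separable loss max(0, eta + s(pos) - s(neg)) from the paper text
        return torch.clamp(margin + score_pos - score_neg, min=0.0).mean()

    # toy usage: one true triple and one tail-corrupted negative in 4 dimensions
    dim = 4
    h, h_p, r, r_p, t, t_p = (torch.randn(1, dim) for _ in range(6))
    s_pos = transd_score(h, h_p, r, r_p, t, t_p)
    s_neg = transd_score(h, h_p, r, r_p, torch.randn(1, dim), t_p)
    print(margin_ranking_loss(s_pos, s_neg))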
GEM-SciDuet-train-33#paper-1047#slide-3
Technical Contributions
Adversarial Contrastive Estimation: A general technique for hard negative mining using a conditional GAN-like setup. A novel entropy regularizer that prevents generator mode collapse and has good empirical benefits. A strategy for handling false negative examples that allows training to progress. Empirical validation across 3 different embedding tasks with state-of-the-art results on some metrics.
Adversarial Contrastive Estimation: A general technique for hard negative mining using a conditional GAN-like setup. A novel entropy regularizer that prevents generator mode collapse and has good empirical benefits. A strategy for handling false negative examples that allows training to progress. Empirical validation across 3 different embedding tasks with state-of-the-art results on some metrics.
[]
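One of the contributions listed in this slide is the handling of false negatives: sampled "negatives" that are actually observed training pairs get zero weight in the discriminator update and a large penalty as the generator's reward. A small illustrative sketch (train_pairs and penalty are invented names, not from the paper's code):

    def handle_false_negatives(xs, sampled_negs, rewards, train_pairs, penalty=-10.0):
        """Return per-example discriminator weights and adjusted generator rewards."""
        disc_weights, gen_rewards = [], []
        for x, y, r in zip(xs, sampled_negs, rewards):
            if (x, y) in train_pairs:            # false negative: this pair is a real observation
                disc_weights.append(0.0)         # drop it from the discriminator loss
                gen_rewards.append(penalty)      # steer the generator away from re-sampling it
            else:
                disc_weights.append(1.0)
                gen_rewards.append(r)
        return disc_weights, gen_rewards

    # toy usage
    train_pairs = {("dog", "animal"), ("cat", "animal")}
    print(handle_false_negatives(["dog", "dog"], ["animal", "carburetor"], [0.8, 0.3], train_pairs))
    # -> ([0.0, 1.0], [-10.0, 0.3])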
GEM-SciDuet-train-33#paper-1047#slide-4
1047
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226 ], "paper_content_text": [ "Introduction Many models learn by contrasting losses on observed positive examples with those on some fictitious negative examples, trying to decrease some score on positive ones while increasing it on negative ones.", "There are multiple reasons why such contrastive learning approach is needed.", "Computational tractability is one.", "For instance, instead of using softmax to predict a word for learning word embeddings, noise contrastive estimation (NCE) (Dyer, 2014; Mnih and Teh, 2012) can be used in skip-gram or CBOW word embedding models (Gutmann and Hyvärinen, 2012; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) .", "Another reason is * authors contributed equally † Work done while author was an intern at Borealis AI modeling need, as certain assumptions are best expressed as some score or energy in margin based or un-normalized probability models (Smith and Eisner, 2005) .", "For example, modeling entity relations as translations or variants thereof in a vector space naturally leads to a distance-based score to be minimized for observed entity-relation-entity triplets (Bordes et al., 2013) .", "Given a scoring function, the gradient of the model's parameters on observed positive examples can be readily computed, but the negative phase requires a design decision on how to sample data.", "In noise contrastive estimation for word embeddings, a negative example is formed by replacing a component of a positive pair by randomly selecting a sampled word from the vocabulary, resulting in a fictitious word-context pair which would be unlikely to actually exist in the dataset.", "This negative sampling by corruption approach is also used in learning knowledge graph embeddings (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Trouillon et al., 2016; Yang et al., 2014; Dettmers et al., 2017) , order embeddings (Vendrov et al., 2016) , caption generation (Dai and Lin, 2017) , etc.", "Typically the corruption distribution is the same for all inputs like in skip-gram or CBOW NCE, rather than being a conditional distribution that takes into account information about the input sample under consideration.", "Furthermore, the corruption process usually only encodes a human prior as to what constitutes a hard negative sample, rather than being learned from data.", "For these two reasons, the simple fixed corruption process often yields only easy negative examples.", "Easy negatives are sub-optimal 
for learning discriminative representation as they do not force the model to find critical characteristics of observed positive data, which has been independently discovered in applications outside NLP previously (Shrivastava et al., 2016) .", "Even if hard negatives are occasionally reached, the infrequency means slow convergence.", "Designing a more sophisticated corruption process could be fruitful, but requires costly trialand-error by a human expert.", "In this work, we propose to augment the simple corruption noise process in various embedding models with an adversarially learned conditional distribution, forming a mixture negative sampler that adapts to the underlying data and the embedding model training progress.", "The resulting method is referred to as adversarial contrastive estimation (ACE).", "The adaptive conditional model engages in a minimax game with the primary embedding model, much like in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a) , where a discriminator net (D), tries to distinguish samples produced by a generator (G) from real data (Goodfellow et al., 2014b) .", "In ACE, the main model learns to distinguish between a real positive example and a negative sample selected by the mixture of a fixed NCE sampler and an adversarial generator.", "The main model and the generator takes alternating turns to update their parameters.", "In fact, our method can be viewed as a conditional GAN (Mirza and Osindero, 2014) on discrete inputs, with a mixture generator consisting of a learned and a fixed distribution, with additional techniques introduced to achieve stable and convergent training of embedding models.", "In our proposed ACE approach, the conditional sampler finds harder negatives than NCE, while being able to gracefully fall back to NCE whenever the generator cannot find hard negatives.", "We demonstrate the efficacy and generality of the proposed method on three different learning tasks, word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) and knowledge graph embeddings (Ji et al., 2015) .", "Method Background: contrastive learning In the most general form, our method applies to supervised learning problems with a contrastive objective of the following form: L(ω) = E p(x + ,y + ,y − ) l ω (x + , y + , y − ) (1) where l ω (x + , y + , y − ) captures both the model with parameters ω and the loss that scores a positive tuple (x + , y + ) against a negative one (x + , y − ).", "E p(x + ,y + ,y − ) (.)", "denotes expectation with respect to some joint distribution over positive and negative samples.", "Furthermore, by the law of total expectation, and the fact that given x + , the negative sampling is not dependent on the positive label, i.e.", "p(y + , y − |x + ) = p(y + |x + )p(y − |x + ), Eq.", "1 can be re-written as E p(x + ) [E p(y + |x + )p(y − |x + ) l ω (x + , y + , y − )] (2) Separable loss In the case where the loss decomposes into a sum of scores on positive and negative tuples such as l ω (x + , y + , y − ) = s ω (x + , y + )−s ω (x + , y − ), then Expression.", "2 becomes E p + (x) [E p + (y|x) s ω (x, y) − E p − (y|x)sω (x, y)] (3) where we moved the + and − to p for notational brevity.", "Learning by stochastic gradient descent aims to adjust ω to pushing down s ω (x, y) on samples from p + while pushing ups ω (x, y) on samples from p − .", "Note that for generality, the scoring function for negative samples, denoted bỹ s ω , could be slightly different from s ω .", "For instance,s could contain a margin as 
in the case of Order Embeddings in Sec.", "4.2.", "Non separable loss Eq.", "1 is the general form that we would like to consider because for certain problems, the loss function cannot be separated into sums of terms containing only positive (x + , y + ) and terms with negatives (x + , y − ).", "An example of such a nonseparable loss is the triplet ranking loss (Schroff et al., 2015) : l ω = max(0, η + s ω (x + , y + ) − s ω (x + , y − )), which does not decompose due to the rectification.", "Noise contrastive estimation The typical NCE approach in tasks such as word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) , and knowledge graph embeddings can be viewed as a special case of Eq.", "2 by taking p(y − |x + ) to be some unconditional p nce (y).", "This leads to efficient computation during training, however, p nce (y) sacrifices the sampling efficiency of learning as the negatives produced using a fixed distribution are not tailored toward x + , and as a result are not necessarily hard negative examples.", "Thus, the model is not forced to discover discriminative representation of observed positive data.", "As training progresses, more and more negative examples are correctly learned, the probability of drawing a hard negative example diminishes further, causing slow convergence.", "Adversarial mixture noise To remedy the above mentioned problem of a fixed unconditional negative sampler, we propose to augment it into a mixture one, λp nce (y) + (1 − λ)g θ (y|x), where g θ is a conditional distribution with a learnable parameter θ and λ is a hyperparameter.", "The objective in Expression.", "2 can then be written as (conditioned on x for notational brevity): L(ω, θ; x) = λ E p(y + |x)pnce(y − ) l ω (x, y + , y − ) + (1 − λ) E p(y + |x)g θ (y − |x) l ω (x, y + , y − ) (4) We learn (ω, θ) in a GAN-style minimax game: min ω max θ V (ω, θ) = min ω max θ E p + (x) L(ω, θ; x) (5 ) The embedding model behind l ω (x, y + , y − ) is similar to the discriminator in (conditional) GAN (or critic in Wasserstein or Energy-based GAN (Zhao et al., 2016) , while g θ (y|x) acts as the generator.", "Henceforth, we will use the term discriminator (D) and embedding model interchangeably, and refer to g θ as the generator.", "Learning the generator There is one important distinction to typical GAN: g θ (y|x) defines a categorical distribution over possible y values, and samples are drawn accordingly; in contrast to typical GAN over continuous data space such as images, where samples are generated by an implicit generative model that warps noise vectors into data points.", "Due to the discrete sampling step, g θ cannot learn by receiving gradient through the discriminator.", "One possible solution is to use the Gumbel-softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016) , which gives a differentiable approximation.", "However, this differentiability comes at the cost of drawing N Gumbel samples per each categorical sample, where N is the number of categories.", "For word embeddings, N is the vocabulary size, and for knowledge graph embeddings, N is the number of entities, both leading to infeasible computational requirements.", "Instead, we use the REINFORCE (Williams, 1992) gradient estimator for ∇ θ L(θ, x): (1−λ) E −l ω (x, y + , y − )∇ θ log(g θ (y − |x)) (6) where the expectation E is with respect to p(y + , y − |x) = p(y + |x)g θ (y − |x), and the discriminator loss l ω (x, y + , y − ) acts as the reward.", "With a separable loss, the (conditional) value 
function of the minimax game is: L(ω, θ; x) = E p + (y|x) s ω (x, y) − E pnce(y)sω (x, y) − E g θ (y|x)sω (x, y) (7) and only the last term depends on the generator parameter ω.", "Hence, with a separable loss, the reward is −s(x + , y − ).", "This reduction does not happen with a non-separable loss, and we have to use l ω (x, y + , y − ).", "Entropy and training stability GAN training can suffer from instability and degeneracy where the generator probability mass collapses to a few modes or points.", "Much work has been done to stabilize GAN training in the continuous case Gulrajani et al., 2017; Cao et al., 2018) .", "In ACE, if the generator g θ probability mass collapses to a few candidates, then after the discriminator successfully learns about these negatives, g θ cannot adapt to select new hard negatives, because the REIN-FORCE gradient estimator Eq.", "6 relies on g θ being able to explore other candidates during sampling.", "Therefore, if the g θ probability mass collapses, instead of leading to oscillation as in typical GAN, the min-max game in ACE reaches an equilibrium where the discriminator wins and g θ can no longer adapt, then ACE falls back to NCE since the negative sampler has another mixture component from NCE.", "This behavior of gracefully falling back to NCE is more desirable than the alternative of stalled training if p − (y|x) does not have a simple p nce mixture component.", "However, we would still like to avoid such collapse, as the adversarial samples provide greater learning signals than NCE samples.", "To this end, we propose to use a regularizer to encourage the categorical distribution g θ (y|x) to have high entropy.", "In order to make the the regularizer interpretable and its hyperparameters easy to tune, we design the following form: R ent (x) = min(0, c − H(g θ (y|x))) (8) where H(g θ (y|x)) is the entropy of the categorical distribution g θ (y|x), and c = log(k) is the entropy of a uniform distribution over k choices, and k is a hyper-parameter.", "Intuitively, R ent expresses the prior that the generator should spread its mass over more than k choices for each x.", "Handling false negatives During negative sampling, p − (y|x) could actually produce y that forms a positive pair that exists in the training set, i.e., a false negative.", "This possibility exists in NCE already, but since p nce is not adaptive, the probability of sampling a false negative is low.", "Hence in NCE, the score on this false negative (true observation) pair is pushed up less in the negative term than in the positive term.", "However, with the adaptive sampler, g ω (y|x), false negatives become a much more severe issue.", "g ω (y|x) can learn to concentrate its mass on a few false negatives, significantly canceling the learning of those observations in the positive phase.", "The entropy regularization reduces this problem as it forces the generator to spread its mass, hence reducing the chance of a false negative.", "To further alleviate this problem, whenever computationally feasible, we apply an additional two-step technique.", "First, we maintain a hash map of the training data in memory, and use it to efficiently detect if a negative sample (x + , y − ) is an actual observation.", "If so, its contribution to the loss is given a zero weight in ω learning step.", "Second, to upate θ in the generator learning step, the reward for false negative samples are replaced by a large penalty, so that the REINFORCE gradient update would steer g θ away from those samples.", "The second 
step is needed to prevent null computation where g θ learns to sample false negatives which are subsequently ignored by the discriminator update for ω. Variance Reduction The basic REINFORCE gradient estimator is poised with high variance, so in practice one often needs to apply variance reduction techniques.", "The most basic form of variance reduction is to subtract a baseline from the reward.", "As long as the baseline is not a function of actions (i.e., samples y − being drawn), the REINFORCE gradient estimator remains unbiased.", "More advanced gradient estimators exist that also reduce variance (Grathwohl et al., 2017; Tucker et al., 2017; Liu et al., 2018) , but for simplicity we use the self-critical baseline method (Rennie et al., 2016) , where the baseline is b(x) = l ω (y + , y , x), or b(x) = −s ω (y , x) in the separable loss case, and y = argmax i g θ (y i |x).", "In other words, the baseline is the reward of the most likely sample according to the generator.", "2.7 Improving exploration in g θ by leveraging NCE samples In Sec.", "2.4 we touched on the need for sufficient exploration in g θ .", "It is possible to also leverage negative samples from NCE to help the generator learn.", "This is essentially off-policy exploration in reinforcement learning since NCE samples are not drawn according to g θ (y|x).", "The generator learning can use importance re-weighting to leverage those samples.", "The resulting REIN-FORCE gradient estimator is basically the same as Eq.", "6 except that the rewards are reweighted by g θ (y − |x)/p nce (y − ), and the expectation is with respect to p(y + |x)p nce (y − ).", "This additional offpolicy learning term provides gradient information for generator learning if g θ (y − |x) is not zero, meaning that for it to be effective in helping exploration, the generator cannot be collapsed at the first place.", "Hence, in practice, this term is only used to further help on top of the entropy regularization, but it does not replace it.", "Related Work Smith and Eisner (2005) proposed contrastive estimation as a way for unsupervised learning of log-linear models by taking implicit evidence from user-defined neighborhoods around observed datapoints.", "Gutmann and Hyvärinen (2010) introduced NCE as an alternative to the hierarchical softmax.", "In the works of Mnih and Teh (2012) and Mnih and Kavukcuoglu (2013) , NCE is applied to log-bilinear models and Vaswani et al.", "(2013) applied NCE to neural probabilistic language models (Yoshua et al., 2003) .", "Compared to these previous NCE methods that rely on simple fixed sampling heuristics, ACE uses an adaptive sampler that produces harder negatives.", "In the domain of max-margin estimation for structured prediction (Taskar et al., 2005) , loss augmented MAP inference plays the role of finding hard negatives (the hardest).", "However, this inference is only tractable in a limited class of models such structured SVM (Tsochantaridis et al., 2005) .", "Compared to those models that use exact maximization to find the hardest negative configuration each time, the generator in ACE can be viewed as learning an approximate amortized inference network.", "Concurrently to this work, Tu and Gimpel (2018) proposes a very similar framework, using a learned inference network for Structured prediction energy networks (SPEN) (Belanger and McCallum, 2016) .", "Concurrent with our work, there have been other interests in applying the GAN to NLP problems (Fedus et al., 2018; Wang et al., 2018; Cai and Wang, 2017) .", 
"Knowledge graph models naturally lend to a GAN setup, and has been the subject of study in Wang et al.", "(2018) and Cai and Wang (2017) .", "These two concurrent works are most closely related to one of the three tasks on which we study ACE in this work.", "Besides a more general formulation that applies to problems beyond those considered in Wang et al.", "(2018) and Cai and Wang (2017) , the techniques introduced in our work on handling false negatives and entropy regularization lead to improved experimental results as shown in Sec.", "5.4.", "Application of ACE on three tasks 4.1 Word Embeddings Word embeddings learn a vector representation of words from co-occurrences in a text corpus.", "NCE casts this learning problem as a binary classification where the model tries to distinguish positive word and context pairs, from negative noise samples composed of word and false context pairs.", "The NCE objective in Skip-gram (Mikolov et al., 2013) for word embeddings is a separable loss of the form: L = − wt∈V [log p(y = 1|w t , w + c ) + K c=1 log p(y = 0|w t , w − c )] (9) Here, w + c is sampled from the set of true contexts and w − c ∼ Q is sampled k times from a fixed noise distribution.", "Mikolov et al.", "(2013) introduced a further simplification of NCE, called \"Negative Sampling\" (Dyer, 2014) .", "With respect to our ACE framework, the difference between NCE and Negative Sampling is inconsequential, so we continue the discussion using NCE.", "A drawback of this sampling scheme is that it favors more common words as context.", "Another issue is that the negative context words are sampled in the same way, rather than tailored toward the actual target word.", "To apply ACE to this problem we first define the value function for the minimax game, V (D, G), as follows: V (D, G) = E p + (wc) [log D(w c , w t )] − E pnce(wc) [− log(1 − D(w c , w t ))] − E g θ (wc|wt) [− log(1 − D(w c , w t ))] (10) with D = p(y = 1|w t , w c ) and G = g θ (w c |w t ).", "Implementation details For our experiments, we train all our models on a single pass of the May 2017 dump of the English Wikipedia with lowercased unigrams.", "The vocabulary size is restricted to the top 150k most frequent words when training from scratch while for finetuning we use the same vocabulary as Pennington et al.", "(2014) , which is 400k of the most frequent words.", "We use 5 NCE samples for each positive sample and 1 adversarial sample in a window size of 10 and the same positive subsampling scheme proposed by Mikolov et al.", "(2013) .", "Learning for both G and D uses Adam (Kingma and Ba, 2014) optimizer with its default parameters.", "Our conditional discriminator is modeled using the Skip-Gram architecture, which is a two layer neural network with a linear mapping between the layers.", "The generator network consists of an embedding layer followed by two small hidden layers, followed by an output softmax layer.", "The first layer of the generator shares its weights with the second embedding layer in the discriminator network, which we find really speeds up convergence as the generator does not have to relearn its own set of embeddings.", "The difference between the discriminator and generator is that a sigmoid nonlinearity is used after the second layer in the discriminator, while in the generator, a softmax layer is used to define a categorical distribution over negative word candidates.", "We find that controlling the generator entropy is critical for finetuning experiments as otherwise the generator collapses to its 
favorite negative sample.", "The word embeddings are taken to be the first dense matrix in the discriminator.", "Order Embeddings Hypernym Prediction As introduced in Vendrov et al.", "(2016) , ordered representations over hierarchy can be learned by order embeddings.", "An example task for such ordered representation is hypernym prediction.", "A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second.", "For completeness, we briefly describe order embeddings, then analyze ACE on the hypernym prediction task.", "In order embeddings, each entity is represented by a vector in R N , the score for a positive ordered pair of entities (x, y) is defined by s ω (x, y) = ||max(0, y − x)|| 2 and, score for a negative ordered pair (x + , y − ) is defined bỹ s ω (x + , y − ) = max{0, η − s(x + , y − )}, where is η is the margin.", "Let f (u) be the embedding function which takes an entity as input and outputs en embedding vector.", "We define P as a set of positive pairs and N as negative pairs, the separable loss function for order embedding task is defined by: L = (u,v)∈P s ω (f (u), f (v)))+ (u,v)∈Ns (f (u), f (v)) (11) Implementation details Our generator for this task is just a linear fully connected softmax layer, taking an embedding vector from discriminator as input and outputting a categorical distribution over the entity set.", "For the discriminator, we inherit all model setting from Vendrov et al.", "(2016) : we use 50 dimensions hidden state and bash size 1000, a learning rate of 0.01 and the Adam optimizer.", "For the generator, we use a batch size of 1000, a learning rate 0.01 and the Adam optimizer.", "We apply weight decay with rate 0.1 and entropy loss regularization as described in Sec.", "2.4.", "We handle false negative as described in Sec.", "2.5.", "After cross validation, variance reduction and leveraging NCE samples does not greatly affect the order embedding task.", "Knowledge Graph Embeddings Knowledge graphs contain entity and relation data of the form (head entity, relation, tail entity), and the goal is to learn from observed positive entity relations and predict missing links (a.k.a.", "link prediction).", "There have been many works on knowledge graph embeddings, e.g.", "TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , TransH (Wang et al., 2014) , TransD (Ji et al., 2015) , Complex (Trouillon et al., 2016) , DistMult (Yang et al., 2014) and ConvE (Dettmers et al., 2017) .", "Many of them use a contrastive learning objective.", "Here we take TransD as an example, and modify its noise contrastive learning to ACE, and demonstrate significant improvement in sample efficiency and link prediction results.", "Implementation details Let a positive entity-relation-entity triplet be denoted by ξ + = (h + , r + , t + ), and a negative triplet could either have its head or tail be a negative sample, i.e.", "ξ − = (h − , r + , t + ) or ξ − = (h + , r + , t − ).", "In either case, the general formulation in Sec.", "2.1 still applies.", "The non-separable loss function takes on the form: l = max(0, η + s ω (ξ + ) − s ω (ξ − )) (12) The scoring rule is: s = h ⊥ + r − t ⊥ (13) where r is the embedding vector for r, and h ⊥ is projection of the embedding of h onto the space of r by h ⊥ = h + r p h p h, where r p and h p are projection parameters of the model.", "t ⊥ is defined in a similar way through parameters t, t p and r p .", "The form of the generator g θ (t − |r + , h + ) is chosen to be f θ (h ⊥ , h ⊥ + r), where f θ 
is a feedforward neural net that concatenates its two input arguments, then propagates through two hidden layers, followed by a final softmax output layer.", "As a function of (r + , h + ), g θ shares parameter with the discriminator, as the inputs to f θ are the embedding vectors.", "During generator learning, only θ is updated and the TransD model embedding parameters are frozen.", "Experiments We evaluate ACE with experiments on word embeddings, order embeddings, and knowledge graph embeddings tasks.", "In short, whenever the original learning objective is contrastive (all tasks except Glove fine-tuning) our results consistently show that ACE improves over NCE.", "In some cases, we include additional comparisons to the state-of-art results on the task to put the significance of such improvements in context: the generic ACE can often make a reasonable baseline competitive with SOTA methods that are optimized for the task.", "For word embeddings, we evaluate models trained from scratch as well as fine-tuned Glove models (Pennington et al., 2014) on word similarity tasks that consist of computing the similarity between word pairs where the ground truth is an average of human scores.", "We choose the Rare word dataset (Luong et al., 2013) and WordSim-353 (Finkelstein et al., 2001) by virtue of our hypothesis that ACE learns better representations for both rare and frequent words.", "We also qualitatively evaluate ACE word embeddings by inspecting the nearest neighbors of selected words.", "For the hypernym prediction task, following Vendrov et al.", "(2016) , hypernym pairs are created from the WordNet hierarchy's transitive closure.", "We use the released random development split and test split from Vendrov et al.", "(2016) , which both contain 4000 edges.", "For knowledge graph embeddings, we use TransD (Ji et al., 2015) as our base model, and perform ablation study to analyze the behavior of ACE with various add-on features, and confirm that entropy regularization is crucial for good performance in ACE.", "We also obtain link prediction results that are competitive or superior to the stateof-arts on the WN18 dataset (Bordes et al., 2014) .", "Training Word Embeddings from scratch In this experiment, we empirically observe that training word embeddings using ACE converges significantly faster than NCE after one epoch.", "As shown in Fig.", "3 both ACE (a mixture of p nce and g θ ) and just g θ (denoted by ADV) significantly outperforms the NCE baseline, with an absolute improvement of 73.1% and 58.5% respectively on RW score.", "We note similar results on WordSim-353 dataset where ACE and ADV outperforms NCE by 40.4% and 45.7%.", "We also evaluate our model qualitatively by inspecting the nearest neighbors of selected words in Table.", "1.", "We first present the five nearest neighbors to each word to show that both NCE and ACE models learn sensible embeddings.", "We then show that ACE embeddings have much better semantic relevance in a larger neighborhood (nearest neighbor 45-50).", "Finetuning Word Embeddings We take off-the-shelf pre-trained Glove embeddings which were trained using 6 billion tokens (Pennington et al., 2014) and fine-tune them using our algorithm.", "It is interesting to note that the original Glove objective does not fit into the contrastive learning framework, but nonetheless we find that they benefit from ACE.", "In fact, we observe that training such that 75% of the words appear as positive contexts is sufficient to beat the largest dimensionality pre-trained Glove 
model on word similarity tasks.", "We evaluate our performance on the Rare Word and WordSim353 data.", "As can be seen from our results in Table 2 , ACE on RW is not always better and for the 100d and 300d Glove embeddings is marginally worse.", "However, on WordSim353 ACE does considerably better across the board to the point where 50d Glove embeddings outperform the 300d baseline Glove model.", "Hypernym Prediction As shown in Table 3 , with ACE training, our method achieves a 1.5% improvement on accu- racy over Vendrov et al.", "(2016) without tunning any of the discriminator's hyperparameters.", "We further report training curve in Fig.", "1 , we report loss curve on randomly sampled pairs.", "We stress that in the ACE model, we train random pairs and generator generated pairs jointly, as shown in Fig.", "2 , hard negatives help the order embedding model converges faster.", "Ablation Study and Improving TransD To analyze different aspects of ACE, we perform an ablation study on the knowledge graph embedding task.", "As described in Sec.", "4.3, the base Method Accuracy (%) order-embeddings 90.6 order-embeddings + Our ACE 92.0 Table 3 : Order Embedding Performance model (discriminator) we apply ACE to is TransD (Ji et al., 2015) .", "Fig.", "5 shows validation performance as training progresses.", "All variants of ACE converges to better results than base NCE.", "Among ACE variants, all methods that include entropy regularization significantly outperform without entropy regularization.", "Without the self critical baseline variance reduction, learning could progress faster at the beginning but the final performance suffers slightly.", "The best performance is obtained without the additional off-policy learning of the generator.", "Table.", "4 shows the final test results on WN18 link prediction task.", "It is interesting to note that ACE improves MRR score more significantly than hit@10.", "As MRR is a lot more sensitive to the top rankings, i.e., how the correct configuration ranks among the competitive alternatives, this is consistent with the fact that ACE samples hard negatives and forces the base model to learn a more discriminative representation of the positive examples.", "(Trouillon et al., 2016) , which achieves the SOTA on this dataset.", "Among all TransD based models (the best results in this group is underlined), ACE improves over basic NCE and another GAN based approach KBGAN.", "The gap on MRR is likely due to the difference between TransD and COMPLEX models.", "Hard Negative Analysis To better understand the effect of the adversarial samples proposed by the generator we plot the discriminator loss on both p nce and g θ samples.", "In this context, a harder sample means a higher loss assigned by the discriminator.", "Fig.", "4 shows that discriminator loss for the word embedding task on g θ samples are always higher than on p nce samples, confirming that the generator is indeed sampling harder negatives.", "For Hypernym Prediction task, Fig.2 shows discriminator loss on negative pairs sampled from NCE and ACE respectively.", "The higher the loss the harder the negative pair is.", "As indicated in the left plot, loss on the ACE negative terms collapses faster than on the NCE negatives.", "After adding entropy regularization and weight decay, the generator works as expected.", "Limitations When the generator softmax is large, the current implementation of ACE training is computationally expensive.", "Although ACE converges faster per iteration, it may converge more slowly 
on wall-clock time depending on the cost of the softmax.", "However, embeddings are typically used as pre-trained building blocks for subsequent tasks.", "Thus, their learning is usually the pre-computation step for the more complex downstream models and spending more time is justified, especially with GPU acceleration.", "We believe that the computational cost could potentially be reduced via some existing techniques such as the \"augment and reduce\" variational inference of (Ruiz et al., 2018), adaptive softmax (Grave et al., 2016) , or the \"sparsely-gated\" softmax of Shazeer et al.", "(2017) , but leave that to future work.", "Another limitation is on the theoretical front.", "As noted in Goodfellow (2014) , GAN learning does not implement maximum likelihood estimation (MLE), while NCE has MLE as an asymptotic limit.", "To the best of our knowledge, more distant connections between GAN and MLE training are not known, and tools for analyzing the equilibrium of a min-max game where players are parametrized by deep neural nets are currently not available to the best of our knowledge.", "Conclusion In this paper, we propose Adversarial Contrastive Estimation as a general technique for improving supervised learning problems that learn by contrasting observed and fictitious samples.", "Specifically, we use a generator network in a conditional GAN like setting to propose hard negative examples for our discriminator model.", "We find that a mixture distribution of randomly sampling negative examples along with an adaptive negative sampler leads to improved performances on a variety of embedding tasks.", "We validate our hypothesis that hard negative examples are critical to optimal learning and can be proposed via our ACE framework.", "Finally, we find that controlling the entropy of the generator through a regularization term and properly handling false negatives is crucial for successful training." ] }
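To summarize the procedure laid out in the paper content above, here is a schematic of one alternating ACE update (a discriminator step followed by a REINFORCE generator step) in PyTorch; the discriminator, generator, and nce_sampler objects and their loss method are hypothetical placeholders, not an implementation released with the paper.

import torch

def ace_training_step(x_pos, y_pos, discriminator, generator, nce_sampler,
                      opt_d, opt_g, lam=0.5):
    """One alternating ACE update: discriminator step, then generator step."""
    # --- Discriminator (embedding model) step on a mixture of negatives ---
    y_nce = nce_sampler(x_pos)                       # fixed-noise negatives
    gen_logits = generator(x_pos)                    # g_theta(.|x)
    y_adv = torch.distributions.Categorical(logits=gen_logits).sample()

    d_loss = (lam * discriminator.loss(x_pos, y_pos, y_nce)
              + (1.0 - lam) * discriminator.loss(x_pos, y_pos, y_adv))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- Generator step: the discriminator loss on its own sample acts as the reward ---
    gen_logits = generator(x_pos)
    dist = torch.distributions.Categorical(logits=gen_logits)
    y_adv = dist.sample()
    reward = discriminator.loss(x_pos, y_pos, y_adv).detach()
    g_loss = -(reward * dist.log_prob(y_adv)).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()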
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "3", "4", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "5.5", "6", "7" ], "paper_header_content": [ "Introduction", "Background: contrastive learning", "Adversarial mixture noise", "Learning the generator", "Entropy and training stability", "Handling false negatives", "Variance Reduction", "Related Work", "Application of ACE on three tasks 4.1 Word Embeddings", "Order Embeddings Hypernym Prediction", "Knowledge Graph Embeddings", "Experiments", "Training Word Embeddings from scratch", "Finetuning Word Embeddings", "Hypernym Prediction", "Ablation Study and Improving TransD", "Hard Negative Analysis", "Limitations", "Conclusion" ] }
GEM-SciDuet-train-33#paper-1047#slide-4
Adversarial Contrastive Estimation
We want to generate negatives that ... fool a discriminative model into misclassifying. Use a Conditional GAN to sample hard negatives given x+. We can augment NCE with an adversarial sampler, λ pnce(y) + (1 − λ) gθ(y−|x).
We want to generate negatives that ... fool a discriminative model into misclassifying. Use a Conditional GAN to sample hard negatives given x+. We can augment NCE with an adversarial sampler, λ pnce(y) + (1 − λ) gθ(y−|x).
[]
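The slide row above summarizes the mixture sampler λ pnce(y) + (1 − λ) gθ(y−|x); below is a small PyTorch sketch of drawing one negative per positive from that mixture, where the generator logits and the fixed noise probabilities are placeholder inputs.

import torch

def sample_mixture_negative(gen_logits, nce_probs, lam=0.5):
    """Draw y- from lam * p_nce(y) + (1 - lam) * g_theta(y|x).

    gen_logits: [batch, num_candidates] conditional generator scores
    nce_probs:  [num_candidates] fixed noise distribution (e.g. unigram frequencies)
    """
    batch = gen_logits.size(0)
    use_nce = torch.rand(batch) < lam                              # per-example coin flip
    y_nce = torch.multinomial(nce_probs, batch, replacement=True)  # fixed-noise candidates
    y_gen = torch.distributions.Categorical(logits=gen_logits).sample()
    return torch.where(use_nce, y_nce, y_gen)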
GEM-SciDuet-train-33#paper-1047#slide-6
1047
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
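One of the tasks this abstract names is order embeddings; as a reminder of the scoring rule used there (s_omega(x, y) = ||max(0, y − x)||^2 with a margin applied to negatives), here is a minimal PyTorch sketch, with the margin value chosen arbitrarily for illustration.

import torch

def order_violation(x, y):
    """Order-embedding penalty s(x, y) = || max(0, y - x) ||^2 over the last dimension."""
    return torch.clamp(y - x, min=0).pow(2).sum(dim=-1)

def order_embedding_loss(pos_x, pos_y, neg_x, neg_y, margin=1.0):
    """Separable loss: push positive violations down, negative ones up to at least the margin."""
    pos = order_violation(pos_x, pos_y)
    neg = torch.clamp(margin - order_violation(neg_x, neg_y), min=0)
    return (pos + neg).mean()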
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226 ], "paper_content_text": [ "Introduction Many models learn by contrasting losses on observed positive examples with those on some fictitious negative examples, trying to decrease some score on positive ones while increasing it on negative ones.", "There are multiple reasons why such contrastive learning approach is needed.", "Computational tractability is one.", "For instance, instead of using softmax to predict a word for learning word embeddings, noise contrastive estimation (NCE) (Dyer, 2014; Mnih and Teh, 2012) can be used in skip-gram or CBOW word embedding models (Gutmann and Hyvärinen, 2012; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) .", "Another reason is * authors contributed equally † Work done while author was an intern at Borealis AI modeling need, as certain assumptions are best expressed as some score or energy in margin based or un-normalized probability models (Smith and Eisner, 2005) .", "For example, modeling entity relations as translations or variants thereof in a vector space naturally leads to a distance-based score to be minimized for observed entity-relation-entity triplets (Bordes et al., 2013) .", "Given a scoring function, the gradient of the model's parameters on observed positive examples can be readily computed, but the negative phase requires a design decision on how to sample data.", "In noise contrastive estimation for word embeddings, a negative example is formed by replacing a component of a positive pair by randomly selecting a sampled word from the vocabulary, resulting in a fictitious word-context pair which would be unlikely to actually exist in the dataset.", "This negative sampling by corruption approach is also used in learning knowledge graph embeddings (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Trouillon et al., 2016; Yang et al., 2014; Dettmers et al., 2017) , order embeddings (Vendrov et al., 2016) , caption generation (Dai and Lin, 2017) , etc.", "Typically the corruption distribution is the same for all inputs like in skip-gram or CBOW NCE, rather than being a conditional distribution that takes into account information about the input sample under consideration.", "Furthermore, the corruption process usually only encodes a human prior as to what constitutes a hard negative sample, rather than being learned from data.", "For these two reasons, the simple fixed corruption process often yields only easy negative examples.", "Easy negatives are sub-optimal 
for learning discriminative representation as they do not force the model to find critical characteristics of observed positive data, which has been independently discovered in applications outside NLP previously (Shrivastava et al., 2016) .", "Even if hard negatives are occasionally reached, the infrequency means slow convergence.", "Designing a more sophisticated corruption process could be fruitful, but requires costly trialand-error by a human expert.", "In this work, we propose to augment the simple corruption noise process in various embedding models with an adversarially learned conditional distribution, forming a mixture negative sampler that adapts to the underlying data and the embedding model training progress.", "The resulting method is referred to as adversarial contrastive estimation (ACE).", "The adaptive conditional model engages in a minimax game with the primary embedding model, much like in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a) , where a discriminator net (D), tries to distinguish samples produced by a generator (G) from real data (Goodfellow et al., 2014b) .", "In ACE, the main model learns to distinguish between a real positive example and a negative sample selected by the mixture of a fixed NCE sampler and an adversarial generator.", "The main model and the generator takes alternating turns to update their parameters.", "In fact, our method can be viewed as a conditional GAN (Mirza and Osindero, 2014) on discrete inputs, with a mixture generator consisting of a learned and a fixed distribution, with additional techniques introduced to achieve stable and convergent training of embedding models.", "In our proposed ACE approach, the conditional sampler finds harder negatives than NCE, while being able to gracefully fall back to NCE whenever the generator cannot find hard negatives.", "We demonstrate the efficacy and generality of the proposed method on three different learning tasks, word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) and knowledge graph embeddings (Ji et al., 2015) .", "Method Background: contrastive learning In the most general form, our method applies to supervised learning problems with a contrastive objective of the following form: L(ω) = E p(x + ,y + ,y − ) l ω (x + , y + , y − ) (1) where l ω (x + , y + , y − ) captures both the model with parameters ω and the loss that scores a positive tuple (x + , y + ) against a negative one (x + , y − ).", "E p(x + ,y + ,y − ) (.)", "denotes expectation with respect to some joint distribution over positive and negative samples.", "Furthermore, by the law of total expectation, and the fact that given x + , the negative sampling is not dependent on the positive label, i.e.", "p(y + , y − |x + ) = p(y + |x + )p(y − |x + ), Eq.", "1 can be re-written as E p(x + ) [E p(y + |x + )p(y − |x + ) l ω (x + , y + , y − )] (2) Separable loss In the case where the loss decomposes into a sum of scores on positive and negative tuples such as l ω (x + , y + , y − ) = s ω (x + , y + )−s ω (x + , y − ), then Expression.", "2 becomes E p + (x) [E p + (y|x) s ω (x, y) − E p − (y|x)sω (x, y)] (3) where we moved the + and − to p for notational brevity.", "Learning by stochastic gradient descent aims to adjust ω to pushing down s ω (x, y) on samples from p + while pushing ups ω (x, y) on samples from p − .", "Note that for generality, the scoring function for negative samples, denoted bỹ s ω , could be slightly different from s ω .", "For instance,s could contain a margin as 
in the case of Order Embeddings in Sec.", "4.2.", "Non separable loss Eq.", "1 is the general form that we would like to consider because for certain problems, the loss function cannot be separated into sums of terms containing only positive (x + , y + ) and terms with negatives (x + , y − ).", "An example of such a nonseparable loss is the triplet ranking loss (Schroff et al., 2015) : l ω = max(0, η + s ω (x + , y + ) − s ω (x + , y − )), which does not decompose due to the rectification.", "Noise contrastive estimation The typical NCE approach in tasks such as word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) , and knowledge graph embeddings can be viewed as a special case of Eq.", "2 by taking p(y − |x + ) to be some unconditional p nce (y).", "This leads to efficient computation during training, however, p nce (y) sacrifices the sampling efficiency of learning as the negatives produced using a fixed distribution are not tailored toward x + , and as a result are not necessarily hard negative examples.", "Thus, the model is not forced to discover discriminative representation of observed positive data.", "As training progresses, more and more negative examples are correctly learned, the probability of drawing a hard negative example diminishes further, causing slow convergence.", "Adversarial mixture noise To remedy the above mentioned problem of a fixed unconditional negative sampler, we propose to augment it into a mixture one, λp nce (y) + (1 − λ)g θ (y|x), where g θ is a conditional distribution with a learnable parameter θ and λ is a hyperparameter.", "The objective in Expression.", "2 can then be written as (conditioned on x for notational brevity): L(ω, θ; x) = λ E p(y + |x)pnce(y − ) l ω (x, y + , y − ) + (1 − λ) E p(y + |x)g θ (y − |x) l ω (x, y + , y − ) (4) We learn (ω, θ) in a GAN-style minimax game: min ω max θ V (ω, θ) = min ω max θ E p + (x) L(ω, θ; x) (5 ) The embedding model behind l ω (x, y + , y − ) is similar to the discriminator in (conditional) GAN (or critic in Wasserstein or Energy-based GAN (Zhao et al., 2016) , while g θ (y|x) acts as the generator.", "Henceforth, we will use the term discriminator (D) and embedding model interchangeably, and refer to g θ as the generator.", "Learning the generator There is one important distinction to typical GAN: g θ (y|x) defines a categorical distribution over possible y values, and samples are drawn accordingly; in contrast to typical GAN over continuous data space such as images, where samples are generated by an implicit generative model that warps noise vectors into data points.", "Due to the discrete sampling step, g θ cannot learn by receiving gradient through the discriminator.", "One possible solution is to use the Gumbel-softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016) , which gives a differentiable approximation.", "However, this differentiability comes at the cost of drawing N Gumbel samples per each categorical sample, where N is the number of categories.", "For word embeddings, N is the vocabulary size, and for knowledge graph embeddings, N is the number of entities, both leading to infeasible computational requirements.", "Instead, we use the REINFORCE (Williams, 1992) gradient estimator for ∇ θ L(θ, x): (1−λ) E −l ω (x, y + , y − )∇ θ log(g θ (y − |x)) (6) where the expectation E is with respect to p(y + , y − |x) = p(y + |x)g θ (y − |x), and the discriminator loss l ω (x, y + , y − ) acts as the reward.", "With a separable loss, the (conditional) value 
function of the minimax game is: L(ω, θ; x) = E p + (y|x) s ω (x, y) − E pnce(y)sω (x, y) − E g θ (y|x)sω (x, y) (7) and only the last term depends on the generator parameter ω.", "Hence, with a separable loss, the reward is −s(x + , y − ).", "This reduction does not happen with a non-separable loss, and we have to use l ω (x, y + , y − ).", "Entropy and training stability GAN training can suffer from instability and degeneracy where the generator probability mass collapses to a few modes or points.", "Much work has been done to stabilize GAN training in the continuous case Gulrajani et al., 2017; Cao et al., 2018) .", "In ACE, if the generator g θ probability mass collapses to a few candidates, then after the discriminator successfully learns about these negatives, g θ cannot adapt to select new hard negatives, because the REIN-FORCE gradient estimator Eq.", "6 relies on g θ being able to explore other candidates during sampling.", "Therefore, if the g θ probability mass collapses, instead of leading to oscillation as in typical GAN, the min-max game in ACE reaches an equilibrium where the discriminator wins and g θ can no longer adapt, then ACE falls back to NCE since the negative sampler has another mixture component from NCE.", "This behavior of gracefully falling back to NCE is more desirable than the alternative of stalled training if p − (y|x) does not have a simple p nce mixture component.", "However, we would still like to avoid such collapse, as the adversarial samples provide greater learning signals than NCE samples.", "To this end, we propose to use a regularizer to encourage the categorical distribution g θ (y|x) to have high entropy.", "In order to make the the regularizer interpretable and its hyperparameters easy to tune, we design the following form: R ent (x) = min(0, c − H(g θ (y|x))) (8) where H(g θ (y|x)) is the entropy of the categorical distribution g θ (y|x), and c = log(k) is the entropy of a uniform distribution over k choices, and k is a hyper-parameter.", "Intuitively, R ent expresses the prior that the generator should spread its mass over more than k choices for each x.", "Handling false negatives During negative sampling, p − (y|x) could actually produce y that forms a positive pair that exists in the training set, i.e., a false negative.", "This possibility exists in NCE already, but since p nce is not adaptive, the probability of sampling a false negative is low.", "Hence in NCE, the score on this false negative (true observation) pair is pushed up less in the negative term than in the positive term.", "However, with the adaptive sampler, g ω (y|x), false negatives become a much more severe issue.", "g ω (y|x) can learn to concentrate its mass on a few false negatives, significantly canceling the learning of those observations in the positive phase.", "The entropy regularization reduces this problem as it forces the generator to spread its mass, hence reducing the chance of a false negative.", "To further alleviate this problem, whenever computationally feasible, we apply an additional two-step technique.", "First, we maintain a hash map of the training data in memory, and use it to efficiently detect if a negative sample (x + , y − ) is an actual observation.", "If so, its contribution to the loss is given a zero weight in ω learning step.", "Second, to upate θ in the generator learning step, the reward for false negative samples are replaced by a large penalty, so that the REINFORCE gradient update would steer g θ away from those samples.", "The second 
step is needed to prevent null computation where g θ learns to sample false negatives which are subsequently ignored by the discriminator update for ω." ] }
GEM-SciDuet-train-33#paper-1047#slide-6
The ACE Generator
Picking a negative example is a discrete choice and not differentiable. Simplest way to train via Policy Gradients is the REINFORCE gradient estimator. Learning is done via a GAN-style min-max game.
Picking a negative example is a discrete choice and not differentiable. Simplest way to train via Policy Gradients is the REINFORCE gradient estimator. Learning is done via a GAN-style min-max game.
[]
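The generator discussed in the slide row above is a conditional categorical distribution; for the knowledge-graph task the paper parameterizes it as a small feed-forward net over (h⊥, h⊥ + r). Below is a PyTorch sketch with illustrative hidden sizes, not the authors' exact architecture.

import torch
import torch.nn as nn

class TailGenerator(nn.Module):
    """g_theta(t- | r+, h+): feed-forward net over [h_perp, h_perp + r] -> logits over entities."""

    def __init__(self, emb_dim, num_entities, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_entities),
        )

    def forward(self, h_perp, r):
        # Concatenate the projected head and its translation, score every candidate tail.
        logits = self.net(torch.cat([h_perp, h_perp + r], dim=-1))
        return logits  # feed to Categorical(logits=...) to sample hard negative tails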
GEM-SciDuet-train-33#paper-1047#slide-7
1047
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
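Tying the knowledge-graph task mentioned in this abstract back to the scoring rule given earlier (s = ||h⊥ + r − t⊥|| with the non-separable margin loss of Eq. 12), here is a simplified PyTorch sketch of the discriminator side; the projection is an illustrative TransD-style approximation rather than a faithful reimplementation.

import torch

def transd_project(e, e_p, r_p):
    """Simplified TransD-style projection: e_perp = e + (e_p . e) * r_p (illustrative only)."""
    return e + (e_p * e).sum(dim=-1, keepdim=True) * r_p

def transd_score(h, t, r, h_p, t_p, r_p):
    """Distance score ||h_perp + r - t_perp|| for a (head, relation, tail) triplet."""
    h_perp = transd_project(h, h_p, r_p)
    t_perp = transd_project(t, t_p, r_p)
    return torch.norm(h_perp + r - t_perp, p=2, dim=-1)

def margin_ranking_loss(score_pos, score_neg, margin=1.0):
    """Non-separable triplet loss: max(0, eta + s(xi+) - s(xi-)), averaged over the batch."""
    return torch.clamp(margin + score_pos - score_neg, min=0).mean()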
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226 ], "paper_content_text": [ "Introduction Many models learn by contrasting losses on observed positive examples with those on some fictitious negative examples, trying to decrease some score on positive ones while increasing it on negative ones.", "There are multiple reasons why such contrastive learning approach is needed.", "Computational tractability is one.", "For instance, instead of using softmax to predict a word for learning word embeddings, noise contrastive estimation (NCE) (Dyer, 2014; Mnih and Teh, 2012) can be used in skip-gram or CBOW word embedding models (Gutmann and Hyvärinen, 2012; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) .", "Another reason is * authors contributed equally † Work done while author was an intern at Borealis AI modeling need, as certain assumptions are best expressed as some score or energy in margin based or un-normalized probability models (Smith and Eisner, 2005) .", "For example, modeling entity relations as translations or variants thereof in a vector space naturally leads to a distance-based score to be minimized for observed entity-relation-entity triplets (Bordes et al., 2013) .", "Given a scoring function, the gradient of the model's parameters on observed positive examples can be readily computed, but the negative phase requires a design decision on how to sample data.", "In noise contrastive estimation for word embeddings, a negative example is formed by replacing a component of a positive pair by randomly selecting a sampled word from the vocabulary, resulting in a fictitious word-context pair which would be unlikely to actually exist in the dataset.", "This negative sampling by corruption approach is also used in learning knowledge graph embeddings (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Trouillon et al., 2016; Yang et al., 2014; Dettmers et al., 2017) , order embeddings (Vendrov et al., 2016) , caption generation (Dai and Lin, 2017) , etc.", "Typically the corruption distribution is the same for all inputs like in skip-gram or CBOW NCE, rather than being a conditional distribution that takes into account information about the input sample under consideration.", "Furthermore, the corruption process usually only encodes a human prior as to what constitutes a hard negative sample, rather than being learned from data.", "For these two reasons, the simple fixed corruption process often yields only easy negative examples.", "Easy negatives are sub-optimal 
for learning discriminative representation as they do not force the model to find critical characteristics of observed positive data, which has been independently discovered in applications outside NLP previously (Shrivastava et al., 2016) .", "Even if hard negatives are occasionally reached, the infrequency means slow convergence.", "Designing a more sophisticated corruption process could be fruitful, but requires costly trialand-error by a human expert.", "In this work, we propose to augment the simple corruption noise process in various embedding models with an adversarially learned conditional distribution, forming a mixture negative sampler that adapts to the underlying data and the embedding model training progress.", "The resulting method is referred to as adversarial contrastive estimation (ACE).", "The adaptive conditional model engages in a minimax game with the primary embedding model, much like in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a) , where a discriminator net (D), tries to distinguish samples produced by a generator (G) from real data (Goodfellow et al., 2014b) .", "In ACE, the main model learns to distinguish between a real positive example and a negative sample selected by the mixture of a fixed NCE sampler and an adversarial generator.", "The main model and the generator takes alternating turns to update their parameters.", "In fact, our method can be viewed as a conditional GAN (Mirza and Osindero, 2014) on discrete inputs, with a mixture generator consisting of a learned and a fixed distribution, with additional techniques introduced to achieve stable and convergent training of embedding models.", "In our proposed ACE approach, the conditional sampler finds harder negatives than NCE, while being able to gracefully fall back to NCE whenever the generator cannot find hard negatives.", "We demonstrate the efficacy and generality of the proposed method on three different learning tasks, word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) and knowledge graph embeddings (Ji et al., 2015) .", "Method Background: contrastive learning In the most general form, our method applies to supervised learning problems with a contrastive objective of the following form: L(ω) = E p(x + ,y + ,y − ) l ω (x + , y + , y − ) (1) where l ω (x + , y + , y − ) captures both the model with parameters ω and the loss that scores a positive tuple (x + , y + ) against a negative one (x + , y − ).", "E p(x + ,y + ,y − ) (.)", "denotes expectation with respect to some joint distribution over positive and negative samples.", "Furthermore, by the law of total expectation, and the fact that given x + , the negative sampling is not dependent on the positive label, i.e.", "p(y + , y − |x + ) = p(y + |x + )p(y − |x + ), Eq.", "1 can be re-written as E p(x + ) [E p(y + |x + )p(y − |x + ) l ω (x + , y + , y − )] (2) Separable loss In the case where the loss decomposes into a sum of scores on positive and negative tuples such as l ω (x + , y + , y − ) = s ω (x + , y + )−s ω (x + , y − ), then Expression.", "2 becomes E p + (x) [E p + (y|x) s ω (x, y) − E p − (y|x)sω (x, y)] (3) where we moved the + and − to p for notational brevity.", "Learning by stochastic gradient descent aims to adjust ω to pushing down s ω (x, y) on samples from p + while pushing ups ω (x, y) on samples from p − .", "Note that for generality, the scoring function for negative samples, denoted bỹ s ω , could be slightly different from s ω .", "For instance,s could contain a margin as 
in the case of Order Embeddings in Sec.", "4.2.", "Non separable loss Eq.", "1 is the general form that we would like to consider because for certain problems, the loss function cannot be separated into sums of terms containing only positive (x + , y + ) and terms with negatives (x + , y − ).", "An example of such a nonseparable loss is the triplet ranking loss (Schroff et al., 2015) : l ω = max(0, η + s ω (x + , y + ) − s ω (x + , y − )), which does not decompose due to the rectification.", "Noise contrastive estimation The typical NCE approach in tasks such as word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) , and knowledge graph embeddings can be viewed as a special case of Eq.", "2 by taking p(y − |x + ) to be some unconditional p nce (y).", "This leads to efficient computation during training, however, p nce (y) sacrifices the sampling efficiency of learning as the negatives produced using a fixed distribution are not tailored toward x + , and as a result are not necessarily hard negative examples.", "Thus, the model is not forced to discover discriminative representation of observed positive data.", "As training progresses, more and more negative examples are correctly learned, the probability of drawing a hard negative example diminishes further, causing slow convergence.", "Adversarial mixture noise To remedy the above mentioned problem of a fixed unconditional negative sampler, we propose to augment it into a mixture one, λp nce (y) + (1 − λ)g θ (y|x), where g θ is a conditional distribution with a learnable parameter θ and λ is a hyperparameter.", "The objective in Expression.", "2 can then be written as (conditioned on x for notational brevity): L(ω, θ; x) = λ E p(y + |x)pnce(y − ) l ω (x, y + , y − ) + (1 − λ) E p(y + |x)g θ (y − |x) l ω (x, y + , y − ) (4) We learn (ω, θ) in a GAN-style minimax game: min ω max θ V (ω, θ) = min ω max θ E p + (x) L(ω, θ; x) (5 ) The embedding model behind l ω (x, y + , y − ) is similar to the discriminator in (conditional) GAN (or critic in Wasserstein or Energy-based GAN (Zhao et al., 2016) , while g θ (y|x) acts as the generator.", "Henceforth, we will use the term discriminator (D) and embedding model interchangeably, and refer to g θ as the generator.", "Learning the generator There is one important distinction to typical GAN: g θ (y|x) defines a categorical distribution over possible y values, and samples are drawn accordingly; in contrast to typical GAN over continuous data space such as images, where samples are generated by an implicit generative model that warps noise vectors into data points.", "Due to the discrete sampling step, g θ cannot learn by receiving gradient through the discriminator.", "One possible solution is to use the Gumbel-softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016) , which gives a differentiable approximation.", "However, this differentiability comes at the cost of drawing N Gumbel samples per each categorical sample, where N is the number of categories.", "For word embeddings, N is the vocabulary size, and for knowledge graph embeddings, N is the number of entities, both leading to infeasible computational requirements.", "Instead, we use the REINFORCE (Williams, 1992) gradient estimator for ∇ θ L(θ, x): (1−λ) E −l ω (x, y + , y − )∇ θ log(g θ (y − |x)) (6) where the expectation E is with respect to p(y + , y − |x) = p(y + |x)g θ (y − |x), and the discriminator loss l ω (x, y + , y − ) acts as the reward.", "With a separable loss, the (conditional) value 
function of the minimax game is: L(ω, θ; x) = E p + (y|x) s ω (x, y) − E pnce(y)sω (x, y) − E g θ (y|x)sω (x, y) (7) and only the last term depends on the generator parameter ω.", "Hence, with a separable loss, the reward is −s(x + , y − ).", "This reduction does not happen with a non-separable loss, and we have to use l ω (x, y + , y − ).", "Entropy and training stability GAN training can suffer from instability and degeneracy where the generator probability mass collapses to a few modes or points.", "Much work has been done to stabilize GAN training in the continuous case Gulrajani et al., 2017; Cao et al., 2018) .", "In ACE, if the generator g θ probability mass collapses to a few candidates, then after the discriminator successfully learns about these negatives, g θ cannot adapt to select new hard negatives, because the REIN-FORCE gradient estimator Eq.", "6 relies on g θ being able to explore other candidates during sampling.", "Therefore, if the g θ probability mass collapses, instead of leading to oscillation as in typical GAN, the min-max game in ACE reaches an equilibrium where the discriminator wins and g θ can no longer adapt, then ACE falls back to NCE since the negative sampler has another mixture component from NCE.", "This behavior of gracefully falling back to NCE is more desirable than the alternative of stalled training if p − (y|x) does not have a simple p nce mixture component.", "However, we would still like to avoid such collapse, as the adversarial samples provide greater learning signals than NCE samples.", "To this end, we propose to use a regularizer to encourage the categorical distribution g θ (y|x) to have high entropy.", "In order to make the the regularizer interpretable and its hyperparameters easy to tune, we design the following form: R ent (x) = min(0, c − H(g θ (y|x))) (8) where H(g θ (y|x)) is the entropy of the categorical distribution g θ (y|x), and c = log(k) is the entropy of a uniform distribution over k choices, and k is a hyper-parameter.", "Intuitively, R ent expresses the prior that the generator should spread its mass over more than k choices for each x.", "Handling false negatives During negative sampling, p − (y|x) could actually produce y that forms a positive pair that exists in the training set, i.e., a false negative.", "This possibility exists in NCE already, but since p nce is not adaptive, the probability of sampling a false negative is low.", "Hence in NCE, the score on this false negative (true observation) pair is pushed up less in the negative term than in the positive term.", "However, with the adaptive sampler, g ω (y|x), false negatives become a much more severe issue.", "g ω (y|x) can learn to concentrate its mass on a few false negatives, significantly canceling the learning of those observations in the positive phase.", "The entropy regularization reduces this problem as it forces the generator to spread its mass, hence reducing the chance of a false negative.", "To further alleviate this problem, whenever computationally feasible, we apply an additional two-step technique.", "First, we maintain a hash map of the training data in memory, and use it to efficiently detect if a negative sample (x + , y − ) is an actual observation.", "If so, its contribution to the loss is given a zero weight in ω learning step.", "Second, to upate θ in the generator learning step, the reward for false negative samples are replaced by a large penalty, so that the REINFORCE gradient update would steer g θ away from those samples.", "The second 
step is needed to prevent null computation where g θ learns to sample false negatives which are subsequently ignored by the discriminator update for ω. Variance Reduction The basic REINFORCE gradient estimator is poised with high variance, so in practice one often needs to apply variance reduction techniques.", "The most basic form of variance reduction is to subtract a baseline from the reward.", "As long as the baseline is not a function of actions (i.e., samples y − being drawn), the REINFORCE gradient estimator remains unbiased.", "More advanced gradient estimators exist that also reduce variance (Grathwohl et al., 2017; Tucker et al., 2017; Liu et al., 2018) , but for simplicity we use the self-critical baseline method (Rennie et al., 2016) , where the baseline is b(x) = l ω (y + , y , x), or b(x) = −s ω (y , x) in the separable loss case, and y = argmax i g θ (y i |x).", "In other words, the baseline is the reward of the most likely sample according to the generator.", "2.7 Improving exploration in g θ by leveraging NCE samples In Sec.", "2.4 we touched on the need for sufficient exploration in g θ .", "It is possible to also leverage negative samples from NCE to help the generator learn.", "This is essentially off-policy exploration in reinforcement learning since NCE samples are not drawn according to g θ (y|x).", "The generator learning can use importance re-weighting to leverage those samples.", "The resulting REIN-FORCE gradient estimator is basically the same as Eq.", "6 except that the rewards are reweighted by g θ (y − |x)/p nce (y − ), and the expectation is with respect to p(y + |x)p nce (y − ).", "This additional offpolicy learning term provides gradient information for generator learning if g θ (y − |x) is not zero, meaning that for it to be effective in helping exploration, the generator cannot be collapsed at the first place.", "Hence, in practice, this term is only used to further help on top of the entropy regularization, but it does not replace it.", "Related Work Smith and Eisner (2005) proposed contrastive estimation as a way for unsupervised learning of log-linear models by taking implicit evidence from user-defined neighborhoods around observed datapoints.", "Gutmann and Hyvärinen (2010) introduced NCE as an alternative to the hierarchical softmax.", "In the works of Mnih and Teh (2012) and Mnih and Kavukcuoglu (2013) , NCE is applied to log-bilinear models and Vaswani et al.", "(2013) applied NCE to neural probabilistic language models (Yoshua et al., 2003) .", "Compared to these previous NCE methods that rely on simple fixed sampling heuristics, ACE uses an adaptive sampler that produces harder negatives.", "In the domain of max-margin estimation for structured prediction (Taskar et al., 2005) , loss augmented MAP inference plays the role of finding hard negatives (the hardest).", "However, this inference is only tractable in a limited class of models such structured SVM (Tsochantaridis et al., 2005) .", "Compared to those models that use exact maximization to find the hardest negative configuration each time, the generator in ACE can be viewed as learning an approximate amortized inference network.", "Concurrently to this work, Tu and Gimpel (2018) proposes a very similar framework, using a learned inference network for Structured prediction energy networks (SPEN) (Belanger and McCallum, 2016) .", "Concurrent with our work, there have been other interests in applying the GAN to NLP problems (Fedus et al., 2018; Wang et al., 2018; Cai and Wang, 2017) .", 
"Knowledge graph models naturally lend to a GAN setup, and has been the subject of study in Wang et al.", "(2018) and Cai and Wang (2017) .", "These two concurrent works are most closely related to one of the three tasks on which we study ACE in this work.", "Besides a more general formulation that applies to problems beyond those considered in Wang et al.", "(2018) and Cai and Wang (2017) , the techniques introduced in our work on handling false negatives and entropy regularization lead to improved experimental results as shown in Sec.", "5.4.", "Application of ACE on three tasks 4.1 Word Embeddings Word embeddings learn a vector representation of words from co-occurrences in a text corpus.", "NCE casts this learning problem as a binary classification where the model tries to distinguish positive word and context pairs, from negative noise samples composed of word and false context pairs.", "The NCE objective in Skip-gram (Mikolov et al., 2013) for word embeddings is a separable loss of the form: L = − wt∈V [log p(y = 1|w t , w + c ) + K c=1 log p(y = 0|w t , w − c )] (9) Here, w + c is sampled from the set of true contexts and w − c ∼ Q is sampled k times from a fixed noise distribution.", "Mikolov et al.", "(2013) introduced a further simplification of NCE, called \"Negative Sampling\" (Dyer, 2014) .", "With respect to our ACE framework, the difference between NCE and Negative Sampling is inconsequential, so we continue the discussion using NCE.", "A drawback of this sampling scheme is that it favors more common words as context.", "Another issue is that the negative context words are sampled in the same way, rather than tailored toward the actual target word.", "To apply ACE to this problem we first define the value function for the minimax game, V (D, G), as follows: V (D, G) = E p + (wc) [log D(w c , w t )] − E pnce(wc) [− log(1 − D(w c , w t ))] − E g θ (wc|wt) [− log(1 − D(w c , w t ))] (10) with D = p(y = 1|w t , w c ) and G = g θ (w c |w t ).", "Implementation details For our experiments, we train all our models on a single pass of the May 2017 dump of the English Wikipedia with lowercased unigrams.", "The vocabulary size is restricted to the top 150k most frequent words when training from scratch while for finetuning we use the same vocabulary as Pennington et al.", "(2014) , which is 400k of the most frequent words.", "We use 5 NCE samples for each positive sample and 1 adversarial sample in a window size of 10 and the same positive subsampling scheme proposed by Mikolov et al.", "(2013) .", "Learning for both G and D uses Adam (Kingma and Ba, 2014) optimizer with its default parameters.", "Our conditional discriminator is modeled using the Skip-Gram architecture, which is a two layer neural network with a linear mapping between the layers.", "The generator network consists of an embedding layer followed by two small hidden layers, followed by an output softmax layer.", "The first layer of the generator shares its weights with the second embedding layer in the discriminator network, which we find really speeds up convergence as the generator does not have to relearn its own set of embeddings.", "The difference between the discriminator and generator is that a sigmoid nonlinearity is used after the second layer in the discriminator, while in the generator, a softmax layer is used to define a categorical distribution over negative word candidates.", "We find that controlling the generator entropy is critical for finetuning experiments as otherwise the generator collapses to its 
favorite negative sample.", "The word embeddings are taken to be the first dense matrix in the discriminator.", "Order Embeddings Hypernym Prediction As introduced in Vendrov et al.", "(2016) , ordered representations over hierarchy can be learned by order embeddings.", "An example task for such ordered representation is hypernym prediction.", "A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second.", "For completeness, we briefly describe order embeddings, then analyze ACE on the hypernym prediction task.", "In order embeddings, each entity is represented by a vector in R N , the score for a positive ordered pair of entities (x, y) is defined by s ω (x, y) = ||max(0, y − x)|| 2 and, score for a negative ordered pair (x + , y − ) is defined bỹ s ω (x + , y − ) = max{0, η − s(x + , y − )}, where is η is the margin.", "Let f (u) be the embedding function which takes an entity as input and outputs en embedding vector.", "We define P as a set of positive pairs and N as negative pairs, the separable loss function for order embedding task is defined by: L = (u,v)∈P s ω (f (u), f (v)))+ (u,v)∈Ns (f (u), f (v)) (11) Implementation details Our generator for this task is just a linear fully connected softmax layer, taking an embedding vector from discriminator as input and outputting a categorical distribution over the entity set.", "For the discriminator, we inherit all model setting from Vendrov et al.", "(2016) : we use 50 dimensions hidden state and bash size 1000, a learning rate of 0.01 and the Adam optimizer.", "For the generator, we use a batch size of 1000, a learning rate 0.01 and the Adam optimizer.", "We apply weight decay with rate 0.1 and entropy loss regularization as described in Sec.", "2.4.", "We handle false negative as described in Sec.", "2.5.", "After cross validation, variance reduction and leveraging NCE samples does not greatly affect the order embedding task.", "Knowledge Graph Embeddings Knowledge graphs contain entity and relation data of the form (head entity, relation, tail entity), and the goal is to learn from observed positive entity relations and predict missing links (a.k.a.", "link prediction).", "There have been many works on knowledge graph embeddings, e.g.", "TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , TransH (Wang et al., 2014) , TransD (Ji et al., 2015) , Complex (Trouillon et al., 2016) , DistMult (Yang et al., 2014) and ConvE (Dettmers et al., 2017) .", "Many of them use a contrastive learning objective.", "Here we take TransD as an example, and modify its noise contrastive learning to ACE, and demonstrate significant improvement in sample efficiency and link prediction results.", "Implementation details Let a positive entity-relation-entity triplet be denoted by ξ + = (h + , r + , t + ), and a negative triplet could either have its head or tail be a negative sample, i.e.", "ξ − = (h − , r + , t + ) or ξ − = (h + , r + , t − ).", "In either case, the general formulation in Sec.", "2.1 still applies.", "The non-separable loss function takes on the form: l = max(0, η + s ω (ξ + ) − s ω (ξ − )) (12) The scoring rule is: s = h ⊥ + r − t ⊥ (13) where r is the embedding vector for r, and h ⊥ is projection of the embedding of h onto the space of r by h ⊥ = h + r p h p h, where r p and h p are projection parameters of the model.", "t ⊥ is defined in a similar way through parameters t, t p and r p .", "The form of the generator g θ (t − |r + , h + ) is chosen to be f θ (h ⊥ , h ⊥ + r), where f θ 
is a feedforward neural net that concatenates its two input arguments, then propagates through two hidden layers, followed by a final softmax output layer.", "As a function of (r + , h + ), g θ shares parameter with the discriminator, as the inputs to f θ are the embedding vectors.", "During generator learning, only θ is updated and the TransD model embedding parameters are frozen.", "Experiments We evaluate ACE with experiments on word embeddings, order embeddings, and knowledge graph embeddings tasks.", "In short, whenever the original learning objective is contrastive (all tasks except Glove fine-tuning) our results consistently show that ACE improves over NCE.", "In some cases, we include additional comparisons to the state-of-art results on the task to put the significance of such improvements in context: the generic ACE can often make a reasonable baseline competitive with SOTA methods that are optimized for the task.", "For word embeddings, we evaluate models trained from scratch as well as fine-tuned Glove models (Pennington et al., 2014) on word similarity tasks that consist of computing the similarity between word pairs where the ground truth is an average of human scores.", "We choose the Rare word dataset (Luong et al., 2013) and WordSim-353 (Finkelstein et al., 2001) by virtue of our hypothesis that ACE learns better representations for both rare and frequent words.", "We also qualitatively evaluate ACE word embeddings by inspecting the nearest neighbors of selected words.", "For the hypernym prediction task, following Vendrov et al.", "(2016) , hypernym pairs are created from the WordNet hierarchy's transitive closure.", "We use the released random development split and test split from Vendrov et al.", "(2016) , which both contain 4000 edges.", "For knowledge graph embeddings, we use TransD (Ji et al., 2015) as our base model, and perform ablation study to analyze the behavior of ACE with various add-on features, and confirm that entropy regularization is crucial for good performance in ACE.", "We also obtain link prediction results that are competitive or superior to the stateof-arts on the WN18 dataset (Bordes et al., 2014) .", "Training Word Embeddings from scratch In this experiment, we empirically observe that training word embeddings using ACE converges significantly faster than NCE after one epoch.", "As shown in Fig.", "3 both ACE (a mixture of p nce and g θ ) and just g θ (denoted by ADV) significantly outperforms the NCE baseline, with an absolute improvement of 73.1% and 58.5% respectively on RW score.", "We note similar results on WordSim-353 dataset where ACE and ADV outperforms NCE by 40.4% and 45.7%.", "We also evaluate our model qualitatively by inspecting the nearest neighbors of selected words in Table.", "1.", "We first present the five nearest neighbors to each word to show that both NCE and ACE models learn sensible embeddings.", "We then show that ACE embeddings have much better semantic relevance in a larger neighborhood (nearest neighbor 45-50).", "Finetuning Word Embeddings We take off-the-shelf pre-trained Glove embeddings which were trained using 6 billion tokens (Pennington et al., 2014) and fine-tune them using our algorithm.", "It is interesting to note that the original Glove objective does not fit into the contrastive learning framework, but nonetheless we find that they benefit from ACE.", "In fact, we observe that training such that 75% of the words appear as positive contexts is sufficient to beat the largest dimensionality pre-trained Glove 
model on word similarity tasks.", "We evaluate our performance on the Rare Word and WordSim353 data.", "As can be seen from our results in Table 2 , ACE on RW is not always better and for the 100d and 300d Glove embeddings is marginally worse.", "However, on WordSim353 ACE does considerably better across the board to the point where 50d Glove embeddings outperform the 300d baseline Glove model.", "Hypernym Prediction As shown in Table 3 , with ACE training, our method achieves a 1.5% improvement on accu- racy over Vendrov et al.", "(2016) without tunning any of the discriminator's hyperparameters.", "We further report training curve in Fig.", "1 , we report loss curve on randomly sampled pairs.", "We stress that in the ACE model, we train random pairs and generator generated pairs jointly, as shown in Fig.", "2 , hard negatives help the order embedding model converges faster.", "Ablation Study and Improving TransD To analyze different aspects of ACE, we perform an ablation study on the knowledge graph embedding task.", "As described in Sec.", "4.3, the base Method Accuracy (%) order-embeddings 90.6 order-embeddings + Our ACE 92.0 Table 3 : Order Embedding Performance model (discriminator) we apply ACE to is TransD (Ji et al., 2015) .", "Fig.", "5 shows validation performance as training progresses.", "All variants of ACE converges to better results than base NCE.", "Among ACE variants, all methods that include entropy regularization significantly outperform without entropy regularization.", "Without the self critical baseline variance reduction, learning could progress faster at the beginning but the final performance suffers slightly.", "The best performance is obtained without the additional off-policy learning of the generator.", "Table.", "4 shows the final test results on WN18 link prediction task.", "It is interesting to note that ACE improves MRR score more significantly than hit@10.", "As MRR is a lot more sensitive to the top rankings, i.e., how the correct configuration ranks among the competitive alternatives, this is consistent with the fact that ACE samples hard negatives and forces the base model to learn a more discriminative representation of the positive examples.", "(Trouillon et al., 2016) , which achieves the SOTA on this dataset.", "Among all TransD based models (the best results in this group is underlined), ACE improves over basic NCE and another GAN based approach KBGAN.", "The gap on MRR is likely due to the difference between TransD and COMPLEX models.", "Hard Negative Analysis To better understand the effect of the adversarial samples proposed by the generator we plot the discriminator loss on both p nce and g θ samples.", "In this context, a harder sample means a higher loss assigned by the discriminator.", "Fig.", "4 shows that discriminator loss for the word embedding task on g θ samples are always higher than on p nce samples, confirming that the generator is indeed sampling harder negatives.", "For Hypernym Prediction task, Fig.2 shows discriminator loss on negative pairs sampled from NCE and ACE respectively.", "The higher the loss the harder the negative pair is.", "As indicated in the left plot, loss on the ACE negative terms collapses faster than on the NCE negatives.", "After adding entropy regularization and weight decay, the generator works as expected.", "Limitations When the generator softmax is large, the current implementation of ACE training is computationally expensive.", "Although ACE converges faster per iteration, it may converge more slowly 
on wall-clock time depending on the cost of the softmax.", "However, embeddings are typically used as pre-trained building blocks for subsequent tasks.", "Thus, their learning is usually the pre-computation step for the more complex downstream models and spending more time is justified, especially with GPU acceleration.", "We believe that the computational cost could potentially be reduced via some existing techniques such as the \"augment and reduce\" variational inference of (Ruiz et al., 2018), adaptive softmax (Grave et al., 2016) , or the \"sparsely-gated\" softmax of Shazeer et al.", "(2017) , but leave that to future work.", "Another limitation is on the theoretical front.", "As noted in Goodfellow (2014) , GAN learning does not implement maximum likelihood estimation (MLE), while NCE has MLE as an asymptotic limit.", "To the best of our knowledge, more distant connections between GAN and MLE training are not known, and tools for analyzing the equilibrium of a min-max game where players are parametrized by deep neural nets are currently not available to the best of our knowledge.", "Conclusion In this paper, we propose Adversarial Contrastive Estimation as a general technique for improving supervised learning problems that learn by contrasting observed and fictitious samples.", "Specifically, we use a generator network in a conditional GAN like setting to propose hard negative examples for our discriminator model.", "We find that a mixture distribution of randomly sampling negative examples along with an adaptive negative sampler leads to improved performances on a variety of embedding tasks.", "We validate our hypothesis that hard negative examples are critical to optimal learning and can be proposed via our ACE framework.", "Finally, we find that controlling the entropy of the generator through a regularization term and properly handling false negatives is crucial for successful training." ] }
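The ACE procedure described in the paper text above (mixture negative sampling, a discriminator update on the contrastive pair, and a REINFORCE generator update with a self-critical baseline) can be summarized in a short sketch. The code below is an editor-added illustration, not the authors' implementation: the dot-product score, uniform p_nce, plain SGD updates, and all names (n_items, E, W, lam, lr_d, lr_g) are assumptions chosen for brevity, whereas the paper's actual models are skip-gram, order embeddings, and TransD with their own scores and optimizers.

```python
# Minimal sketch of one ACE step for a separable dot-product embedding model.
# Everything here is illustrative: uniform p_nce, plain SGD, toy dimensions.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 1000, 32
E = 0.1 * rng.standard_normal((n_items, dim))    # embedding model parameters (omega)
W = 0.01 * rng.standard_normal((dim, n_items))   # generator parameters (theta)
lam, lr_d, lr_g = 0.5, 0.05, 0.05                # NCE mixture weight, learning rates

def score(x: int, y: int) -> float:
    """s_omega(x, y): pair compatibility; pushed up on positives, down on negatives."""
    return float(E[x] @ E[y])

def g_probs(x: int) -> np.ndarray:
    """g_theta(y | x): conditional categorical distribution over negative candidates."""
    z = E[x] @ W
    p = np.exp(z - z.max())
    return p / p.sum()

def ace_step(x: int, y_pos: int) -> None:
    # 1) Draw one negative from the mixture  lam * p_nce(y) + (1 - lam) * g_theta(y | x).
    p = None
    if rng.random() < lam:
        y_neg = int(rng.integers(n_items))       # p_nce: a fixed, easy-to-sample distribution
    else:
        p = g_probs(x)
        y_neg = int(rng.choice(n_items, p=p))

    # Generator reward = how plausible the fake pair still looks to the discriminator;
    # self-critical baseline = reward of the generator's most likely sample.
    if p is not None:
        advantage = score(x, y_neg) - score(x, int(np.argmax(p)))

    # 2) Discriminator step on the separable loss: raise s(x, y_pos), lower s(x, y_neg).
    gx = E[y_pos] - E[y_neg]
    E[y_pos] += lr_d * E[x]
    E[y_neg] -= lr_d * E[x]
    E[x] += lr_d * gx

    # 3) Generator step: REINFORCE on the categorical sample it produced itself.
    if p is not None:
        grad_logp = -p
        grad_logp[y_neg] += 1.0                  # d log g_theta(y_neg | x) / d logits
        W += lr_g * advantage * np.outer(E[x], grad_logp)

# Toy usage: consecutive ids are treated as positive pairs.
for _ in range(200):
    x = int(rng.integers(n_items - 1))
    ace_step(x, x + 1)
```

The design point the sketch preserves is that the generator only receives learning signal for samples it drew, with the discriminator's assessment of the fake pair as reward, so the sampler adapts toward negatives the embedding model still finds plausible.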
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "3", "4", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "5.5", "6", "7" ], "paper_header_content": [ "Introduction", "Background: contrastive learning", "Adversarial mixture noise", "Learning the generator", "Entropy and training stability", "Handling false negatives", "Variance Reduction", "Related Work", "Application of ACE on three tasks 4.1 Word Embeddings", "Order Embeddings Hypernym Prediction", "Knowledge Graph Embeddings", "Experiments", "Training Word Embeddings from scratch", "Finetuning Word Embeddings", "Hypernym Prediction", "Ablation Study and Improving TransD", "Hard Negative Analysis", "Limitations", "Conclusion" ] }
GEM-SciDuet-train-33#paper-1047#slide-7
Technical Contributions for effective training
GAN training can suffer from mode collapse: what happens if the generator collapses on its favorite few negative examples? Add an entropy regularizer term to the generator's loss: H(g_θ(y|x)) is the entropy of the categorical distribution and c = log(k) is the entropy of a uniform distribution over k choices. The generator can also sample false negatives, causing gradient cancellation. Apply an additional two-step technique whenever computationally feasible: maintain an in-memory hash map of the training data so the discriminator filters out false negatives, and the generator receives a penalty for producing the false negative; the entropy regularizer spreads out the probability mass. REINFORCE is known to have extremely high variance. Reduce variance using the self-critical baseline; other baselines and gradient estimators are also good options. The generator does not learn from the NCE samples by default. Use importance sampling: the generator can leverage NCE samples for exploration in an off-policy scheme, with the reward re-weighted by g_θ(y−|x)/p_nce(y−).
GAN training can suffer from mode collapse: what happens if the generator collapses on its favorite few negative examples? Add an entropy regularizer term to the generator's loss: H(g_θ(y|x)) is the entropy of the categorical distribution and c = log(k) is the entropy of a uniform distribution over k choices. The generator can also sample false negatives, causing gradient cancellation. Apply an additional two-step technique whenever computationally feasible: maintain an in-memory hash map of the training data so the discriminator filters out false negatives, and the generator receives a penalty for producing the false negative; the entropy regularizer spreads out the probability mass. REINFORCE is known to have extremely high variance. Reduce variance using the self-critical baseline; other baselines and gradient estimators are also good options. The generator does not learn from the NCE samples by default. Use importance sampling: the generator can leverage NCE samples for exploration in an off-policy scheme, with the reward re-weighted by g_θ(y−|x)/p_nce(y−).
[]
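The slide content above lists the remaining training tricks: filtering false negatives with an in-memory hash map (plus a penalty reward for the generator) and re-using NCE samples for off-policy generator learning via importance weights g_θ(y−|x)/p_nce(y−). The short sketch below is an editor-added illustration of those two pieces; the penalty value and all names are assumptions, not the authors' code.

```python
# Sketch: false-negative handling and importance re-weighting of NCE samples.
import numpy as np

FALSE_NEG_PENALTY = -10.0   # the text only says "a large penalty"; the value is an assumption

def handle_false_negative(x, y_neg, observed_pairs: set):
    """observed_pairs is an in-memory hash set of training pairs (x, y).

    Returns (d_weight, reward_override): a false negative contributes nothing to the
    discriminator loss and gives the generator a large negative reward instead."""
    if (x, y_neg) in observed_pairs:
        return 0.0, FALSE_NEG_PENALTY
    return 1.0, None

def off_policy_weight(y_neg: int, g_prob: np.ndarray, nce_prob: np.ndarray) -> float:
    """Importance weight g_theta(y_neg | x) / p_nce(y_neg), applied to the REINFORCE
    reward when the negative was drawn from p_nce rather than from the generator."""
    return float(g_prob[y_neg] / (nce_prob[y_neg] + 1e-12))
```

Zero-weighting the discriminator term alone would let the generator keep proposing false negatives that are then ignored; the penalty reward is what steers the generator away from them.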
GEM-SciDuet-train-33#paper-1047#slide-9
1047
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226 ], "paper_content_text": [ "Introduction Many models learn by contrasting losses on observed positive examples with those on some fictitious negative examples, trying to decrease some score on positive ones while increasing it on negative ones.", "There are multiple reasons why such contrastive learning approach is needed.", "Computational tractability is one.", "For instance, instead of using softmax to predict a word for learning word embeddings, noise contrastive estimation (NCE) (Dyer, 2014; Mnih and Teh, 2012) can be used in skip-gram or CBOW word embedding models (Gutmann and Hyvärinen, 2012; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) .", "Another reason is * authors contributed equally † Work done while author was an intern at Borealis AI modeling need, as certain assumptions are best expressed as some score or energy in margin based or un-normalized probability models (Smith and Eisner, 2005) .", "For example, modeling entity relations as translations or variants thereof in a vector space naturally leads to a distance-based score to be minimized for observed entity-relation-entity triplets (Bordes et al., 2013) .", "Given a scoring function, the gradient of the model's parameters on observed positive examples can be readily computed, but the negative phase requires a design decision on how to sample data.", "In noise contrastive estimation for word embeddings, a negative example is formed by replacing a component of a positive pair by randomly selecting a sampled word from the vocabulary, resulting in a fictitious word-context pair which would be unlikely to actually exist in the dataset.", "This negative sampling by corruption approach is also used in learning knowledge graph embeddings (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Trouillon et al., 2016; Yang et al., 2014; Dettmers et al., 2017) , order embeddings (Vendrov et al., 2016) , caption generation (Dai and Lin, 2017) , etc.", "Typically the corruption distribution is the same for all inputs like in skip-gram or CBOW NCE, rather than being a conditional distribution that takes into account information about the input sample under consideration.", "Furthermore, the corruption process usually only encodes a human prior as to what constitutes a hard negative sample, rather than being learned from data.", "For these two reasons, the simple fixed corruption process often yields only easy negative examples.", "Easy negatives are sub-optimal 
for learning discriminative representation as they do not force the model to find critical characteristics of observed positive data, which has been independently discovered in applications outside NLP previously (Shrivastava et al., 2016) .", "Even if hard negatives are occasionally reached, the infrequency means slow convergence.", "Designing a more sophisticated corruption process could be fruitful, but requires costly trialand-error by a human expert.", "In this work, we propose to augment the simple corruption noise process in various embedding models with an adversarially learned conditional distribution, forming a mixture negative sampler that adapts to the underlying data and the embedding model training progress.", "The resulting method is referred to as adversarial contrastive estimation (ACE).", "The adaptive conditional model engages in a minimax game with the primary embedding model, much like in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a) , where a discriminator net (D), tries to distinguish samples produced by a generator (G) from real data (Goodfellow et al., 2014b) .", "In ACE, the main model learns to distinguish between a real positive example and a negative sample selected by the mixture of a fixed NCE sampler and an adversarial generator.", "The main model and the generator takes alternating turns to update their parameters.", "In fact, our method can be viewed as a conditional GAN (Mirza and Osindero, 2014) on discrete inputs, with a mixture generator consisting of a learned and a fixed distribution, with additional techniques introduced to achieve stable and convergent training of embedding models.", "In our proposed ACE approach, the conditional sampler finds harder negatives than NCE, while being able to gracefully fall back to NCE whenever the generator cannot find hard negatives.", "We demonstrate the efficacy and generality of the proposed method on three different learning tasks, word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) and knowledge graph embeddings (Ji et al., 2015) .", "Method Background: contrastive learning In the most general form, our method applies to supervised learning problems with a contrastive objective of the following form: L(ω) = E p(x + ,y + ,y − ) l ω (x + , y + , y − ) (1) where l ω (x + , y + , y − ) captures both the model with parameters ω and the loss that scores a positive tuple (x + , y + ) against a negative one (x + , y − ).", "E p(x + ,y + ,y − ) (.)", "denotes expectation with respect to some joint distribution over positive and negative samples.", "Furthermore, by the law of total expectation, and the fact that given x + , the negative sampling is not dependent on the positive label, i.e.", "p(y + , y − |x + ) = p(y + |x + )p(y − |x + ), Eq.", "1 can be re-written as E p(x + ) [E p(y + |x + )p(y − |x + ) l ω (x + , y + , y − )] (2) Separable loss In the case where the loss decomposes into a sum of scores on positive and negative tuples such as l ω (x + , y + , y − ) = s ω (x + , y + )−s ω (x + , y − ), then Expression.", "2 becomes E p + (x) [E p + (y|x) s ω (x, y) − E p − (y|x)sω (x, y)] (3) where we moved the + and − to p for notational brevity.", "Learning by stochastic gradient descent aims to adjust ω to pushing down s ω (x, y) on samples from p + while pushing ups ω (x, y) on samples from p − .", "Note that for generality, the scoring function for negative samples, denoted bỹ s ω , could be slightly different from s ω .", "For instance,s could contain a margin as 
in the case of Order Embeddings in Sec.", "4.2.", "Non separable loss Eq.", "1 is the general form that we would like to consider because for certain problems, the loss function cannot be separated into sums of terms containing only positive (x + , y + ) and terms with negatives (x + , y − ).", "An example of such a nonseparable loss is the triplet ranking loss (Schroff et al., 2015) : l ω = max(0, η + s ω (x + , y + ) − s ω (x + , y − )), which does not decompose due to the rectification.", "Noise contrastive estimation The typical NCE approach in tasks such as word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) , and knowledge graph embeddings can be viewed as a special case of Eq.", "2 by taking p(y − |x + ) to be some unconditional p nce (y).", "This leads to efficient computation during training, however, p nce (y) sacrifices the sampling efficiency of learning as the negatives produced using a fixed distribution are not tailored toward x + , and as a result are not necessarily hard negative examples.", "Thus, the model is not forced to discover discriminative representation of observed positive data.", "As training progresses, more and more negative examples are correctly learned, the probability of drawing a hard negative example diminishes further, causing slow convergence.", "Adversarial mixture noise To remedy the above mentioned problem of a fixed unconditional negative sampler, we propose to augment it into a mixture one, λp nce (y) + (1 − λ)g θ (y|x), where g θ is a conditional distribution with a learnable parameter θ and λ is a hyperparameter.", "The objective in Expression.", "2 can then be written as (conditioned on x for notational brevity): L(ω, θ; x) = λ E p(y + |x)pnce(y − ) l ω (x, y + , y − ) + (1 − λ) E p(y + |x)g θ (y − |x) l ω (x, y + , y − ) (4) We learn (ω, θ) in a GAN-style minimax game: min ω max θ V (ω, θ) = min ω max θ E p + (x) L(ω, θ; x) (5 ) The embedding model behind l ω (x, y + , y − ) is similar to the discriminator in (conditional) GAN (or critic in Wasserstein or Energy-based GAN (Zhao et al., 2016) , while g θ (y|x) acts as the generator.", "Henceforth, we will use the term discriminator (D) and embedding model interchangeably, and refer to g θ as the generator.", "Learning the generator There is one important distinction to typical GAN: g θ (y|x) defines a categorical distribution over possible y values, and samples are drawn accordingly; in contrast to typical GAN over continuous data space such as images, where samples are generated by an implicit generative model that warps noise vectors into data points.", "Due to the discrete sampling step, g θ cannot learn by receiving gradient through the discriminator.", "One possible solution is to use the Gumbel-softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016) , which gives a differentiable approximation.", "However, this differentiability comes at the cost of drawing N Gumbel samples per each categorical sample, where N is the number of categories.", "For word embeddings, N is the vocabulary size, and for knowledge graph embeddings, N is the number of entities, both leading to infeasible computational requirements.", "Instead, we use the REINFORCE (Williams, 1992) gradient estimator for ∇ θ L(θ, x): (1−λ) E −l ω (x, y + , y − )∇ θ log(g θ (y − |x)) (6) where the expectation E is with respect to p(y + , y − |x) = p(y + |x)g θ (y − |x), and the discriminator loss l ω (x, y + , y − ) acts as the reward.", "With a separable loss, the (conditional) value 
function of the minimax game is: L(ω, θ; x) = E p + (y|x) s ω (x, y) − E pnce(y)sω (x, y) − E g θ (y|x)sω (x, y) (7) and only the last term depends on the generator parameter ω.", "Hence, with a separable loss, the reward is −s(x + , y − ).", "This reduction does not happen with a non-separable loss, and we have to use l ω (x, y + , y − ).", "Entropy and training stability GAN training can suffer from instability and degeneracy where the generator probability mass collapses to a few modes or points.", "Much work has been done to stabilize GAN training in the continuous case Gulrajani et al., 2017; Cao et al., 2018) .", "In ACE, if the generator g θ probability mass collapses to a few candidates, then after the discriminator successfully learns about these negatives, g θ cannot adapt to select new hard negatives, because the REIN-FORCE gradient estimator Eq.", "6 relies on g θ being able to explore other candidates during sampling.", "Therefore, if the g θ probability mass collapses, instead of leading to oscillation as in typical GAN, the min-max game in ACE reaches an equilibrium where the discriminator wins and g θ can no longer adapt, then ACE falls back to NCE since the negative sampler has another mixture component from NCE.", "This behavior of gracefully falling back to NCE is more desirable than the alternative of stalled training if p − (y|x) does not have a simple p nce mixture component.", "However, we would still like to avoid such collapse, as the adversarial samples provide greater learning signals than NCE samples.", "To this end, we propose to use a regularizer to encourage the categorical distribution g θ (y|x) to have high entropy.", "In order to make the the regularizer interpretable and its hyperparameters easy to tune, we design the following form: R ent (x) = min(0, c − H(g θ (y|x))) (8) where H(g θ (y|x)) is the entropy of the categorical distribution g θ (y|x), and c = log(k) is the entropy of a uniform distribution over k choices, and k is a hyper-parameter.", "Intuitively, R ent expresses the prior that the generator should spread its mass over more than k choices for each x.", "Handling false negatives During negative sampling, p − (y|x) could actually produce y that forms a positive pair that exists in the training set, i.e., a false negative.", "This possibility exists in NCE already, but since p nce is not adaptive, the probability of sampling a false negative is low.", "Hence in NCE, the score on this false negative (true observation) pair is pushed up less in the negative term than in the positive term.", "However, with the adaptive sampler, g ω (y|x), false negatives become a much more severe issue.", "g ω (y|x) can learn to concentrate its mass on a few false negatives, significantly canceling the learning of those observations in the positive phase.", "The entropy regularization reduces this problem as it forces the generator to spread its mass, hence reducing the chance of a false negative.", "To further alleviate this problem, whenever computationally feasible, we apply an additional two-step technique.", "First, we maintain a hash map of the training data in memory, and use it to efficiently detect if a negative sample (x + , y − ) is an actual observation.", "If so, its contribution to the loss is given a zero weight in ω learning step.", "Second, to upate θ in the generator learning step, the reward for false negative samples are replaced by a large penalty, so that the REINFORCE gradient update would steer g θ away from those samples.", "The second 
step is needed to prevent null computation where g θ learns to sample false negatives which are subsequently ignored by the discriminator update for ω. Variance Reduction The basic REINFORCE gradient estimator is poised with high variance, so in practice one often needs to apply variance reduction techniques.", "The most basic form of variance reduction is to subtract a baseline from the reward.", "As long as the baseline is not a function of actions (i.e., samples y − being drawn), the REINFORCE gradient estimator remains unbiased.", "More advanced gradient estimators exist that also reduce variance (Grathwohl et al., 2017; Tucker et al., 2017; Liu et al., 2018) , but for simplicity we use the self-critical baseline method (Rennie et al., 2016) , where the baseline is b(x) = l ω (y + , y , x), or b(x) = −s ω (y , x) in the separable loss case, and y = argmax i g θ (y i |x).", "In other words, the baseline is the reward of the most likely sample according to the generator.", "2.7 Improving exploration in g θ by leveraging NCE samples In Sec.", "2.4 we touched on the need for sufficient exploration in g θ .", "It is possible to also leverage negative samples from NCE to help the generator learn.", "This is essentially off-policy exploration in reinforcement learning since NCE samples are not drawn according to g θ (y|x).", "The generator learning can use importance re-weighting to leverage those samples.", "The resulting REIN-FORCE gradient estimator is basically the same as Eq.", "6 except that the rewards are reweighted by g θ (y − |x)/p nce (y − ), and the expectation is with respect to p(y + |x)p nce (y − ).", "This additional offpolicy learning term provides gradient information for generator learning if g θ (y − |x) is not zero, meaning that for it to be effective in helping exploration, the generator cannot be collapsed at the first place.", "Hence, in practice, this term is only used to further help on top of the entropy regularization, but it does not replace it.", "Related Work Smith and Eisner (2005) proposed contrastive estimation as a way for unsupervised learning of log-linear models by taking implicit evidence from user-defined neighborhoods around observed datapoints.", "Gutmann and Hyvärinen (2010) introduced NCE as an alternative to the hierarchical softmax.", "In the works of Mnih and Teh (2012) and Mnih and Kavukcuoglu (2013) , NCE is applied to log-bilinear models and Vaswani et al.", "(2013) applied NCE to neural probabilistic language models (Yoshua et al., 2003) .", "Compared to these previous NCE methods that rely on simple fixed sampling heuristics, ACE uses an adaptive sampler that produces harder negatives.", "In the domain of max-margin estimation for structured prediction (Taskar et al., 2005) , loss augmented MAP inference plays the role of finding hard negatives (the hardest).", "However, this inference is only tractable in a limited class of models such structured SVM (Tsochantaridis et al., 2005) .", "Compared to those models that use exact maximization to find the hardest negative configuration each time, the generator in ACE can be viewed as learning an approximate amortized inference network.", "Concurrently to this work, Tu and Gimpel (2018) proposes a very similar framework, using a learned inference network for Structured prediction energy networks (SPEN) (Belanger and McCallum, 2016) .", "Concurrent with our work, there have been other interests in applying the GAN to NLP problems (Fedus et al., 2018; Wang et al., 2018; Cai and Wang, 2017) .", 
"Knowledge graph models naturally lend to a GAN setup, and has been the subject of study in Wang et al.", "(2018) and Cai and Wang (2017) .", "These two concurrent works are most closely related to one of the three tasks on which we study ACE in this work.", "Besides a more general formulation that applies to problems beyond those considered in Wang et al.", "(2018) and Cai and Wang (2017) , the techniques introduced in our work on handling false negatives and entropy regularization lead to improved experimental results as shown in Sec.", "5.4.", "Application of ACE on three tasks 4.1 Word Embeddings Word embeddings learn a vector representation of words from co-occurrences in a text corpus.", "NCE casts this learning problem as a binary classification where the model tries to distinguish positive word and context pairs, from negative noise samples composed of word and false context pairs.", "The NCE objective in Skip-gram (Mikolov et al., 2013) for word embeddings is a separable loss of the form: L = − wt∈V [log p(y = 1|w t , w + c ) + K c=1 log p(y = 0|w t , w − c )] (9) Here, w + c is sampled from the set of true contexts and w − c ∼ Q is sampled k times from a fixed noise distribution.", "Mikolov et al.", "(2013) introduced a further simplification of NCE, called \"Negative Sampling\" (Dyer, 2014) .", "With respect to our ACE framework, the difference between NCE and Negative Sampling is inconsequential, so we continue the discussion using NCE.", "A drawback of this sampling scheme is that it favors more common words as context.", "Another issue is that the negative context words are sampled in the same way, rather than tailored toward the actual target word.", "To apply ACE to this problem we first define the value function for the minimax game, V (D, G), as follows: V (D, G) = E p + (wc) [log D(w c , w t )] − E pnce(wc) [− log(1 − D(w c , w t ))] − E g θ (wc|wt) [− log(1 − D(w c , w t ))] (10) with D = p(y = 1|w t , w c ) and G = g θ (w c |w t ).", "Implementation details For our experiments, we train all our models on a single pass of the May 2017 dump of the English Wikipedia with lowercased unigrams.", "The vocabulary size is restricted to the top 150k most frequent words when training from scratch while for finetuning we use the same vocabulary as Pennington et al.", "(2014) , which is 400k of the most frequent words.", "We use 5 NCE samples for each positive sample and 1 adversarial sample in a window size of 10 and the same positive subsampling scheme proposed by Mikolov et al.", "(2013) .", "Learning for both G and D uses Adam (Kingma and Ba, 2014) optimizer with its default parameters.", "Our conditional discriminator is modeled using the Skip-Gram architecture, which is a two layer neural network with a linear mapping between the layers.", "The generator network consists of an embedding layer followed by two small hidden layers, followed by an output softmax layer.", "The first layer of the generator shares its weights with the second embedding layer in the discriminator network, which we find really speeds up convergence as the generator does not have to relearn its own set of embeddings.", "The difference between the discriminator and generator is that a sigmoid nonlinearity is used after the second layer in the discriminator, while in the generator, a softmax layer is used to define a categorical distribution over negative word candidates.", "We find that controlling the generator entropy is critical for finetuning experiments as otherwise the generator collapses to its 
favorite negative sample.", "The word embeddings are taken to be the first dense matrix in the discriminator.", "Order Embeddings Hypernym Prediction As introduced in Vendrov et al.", "(2016) , ordered representations over hierarchy can be learned by order embeddings.", "An example task for such ordered representation is hypernym prediction.", "A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second.", "For completeness, we briefly describe order embeddings, then analyze ACE on the hypernym prediction task.", "In order embeddings, each entity is represented by a vector in R N , the score for a positive ordered pair of entities (x, y) is defined by s ω (x, y) = ||max(0, y − x)|| 2 and, score for a negative ordered pair (x + , y − ) is defined bỹ s ω (x + , y − ) = max{0, η − s(x + , y − )}, where is η is the margin.", "Let f (u) be the embedding function which takes an entity as input and outputs en embedding vector.", "We define P as a set of positive pairs and N as negative pairs, the separable loss function for order embedding task is defined by: L = (u,v)∈P s ω (f (u), f (v)))+ (u,v)∈Ns (f (u), f (v)) (11) Implementation details Our generator for this task is just a linear fully connected softmax layer, taking an embedding vector from discriminator as input and outputting a categorical distribution over the entity set.", "For the discriminator, we inherit all model setting from Vendrov et al.", "(2016) : we use 50 dimensions hidden state and bash size 1000, a learning rate of 0.01 and the Adam optimizer.", "For the generator, we use a batch size of 1000, a learning rate 0.01 and the Adam optimizer.", "We apply weight decay with rate 0.1 and entropy loss regularization as described in Sec.", "2.4.", "We handle false negative as described in Sec.", "2.5.", "After cross validation, variance reduction and leveraging NCE samples does not greatly affect the order embedding task.", "Knowledge Graph Embeddings Knowledge graphs contain entity and relation data of the form (head entity, relation, tail entity), and the goal is to learn from observed positive entity relations and predict missing links (a.k.a.", "link prediction).", "There have been many works on knowledge graph embeddings, e.g.", "TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , TransH (Wang et al., 2014) , TransD (Ji et al., 2015) , Complex (Trouillon et al., 2016) , DistMult (Yang et al., 2014) and ConvE (Dettmers et al., 2017) .", "Many of them use a contrastive learning objective.", "Here we take TransD as an example, and modify its noise contrastive learning to ACE, and demonstrate significant improvement in sample efficiency and link prediction results.", "Implementation details Let a positive entity-relation-entity triplet be denoted by ξ + = (h + , r + , t + ), and a negative triplet could either have its head or tail be a negative sample, i.e.", "ξ − = (h − , r + , t + ) or ξ − = (h + , r + , t − ).", "In either case, the general formulation in Sec.", "2.1 still applies.", "The non-separable loss function takes on the form: l = max(0, η + s ω (ξ + ) − s ω (ξ − )) (12) The scoring rule is: s = h ⊥ + r − t ⊥ (13) where r is the embedding vector for r, and h ⊥ is projection of the embedding of h onto the space of r by h ⊥ = h + r p h p h, where r p and h p are projection parameters of the model.", "t ⊥ is defined in a similar way through parameters t, t p and r p .", "The form of the generator g θ (t − |r + , h + ) is chosen to be f θ (h ⊥ , h ⊥ + r), where f θ 
is a feedforward neural net that concatenates its two input arguments, then propagates through two hidden layers, followed by a final softmax output layer.", "As a function of (r + , h + ), g θ shares parameter with the discriminator, as the inputs to f θ are the embedding vectors.", "During generator learning, only θ is updated and the TransD model embedding parameters are frozen.", "Experiments We evaluate ACE with experiments on word embeddings, order embeddings, and knowledge graph embeddings tasks.", "In short, whenever the original learning objective is contrastive (all tasks except Glove fine-tuning) our results consistently show that ACE improves over NCE.", "In some cases, we include additional comparisons to the state-of-art results on the task to put the significance of such improvements in context: the generic ACE can often make a reasonable baseline competitive with SOTA methods that are optimized for the task.", "For word embeddings, we evaluate models trained from scratch as well as fine-tuned Glove models (Pennington et al., 2014) on word similarity tasks that consist of computing the similarity between word pairs where the ground truth is an average of human scores.", "We choose the Rare word dataset (Luong et al., 2013) and WordSim-353 (Finkelstein et al., 2001) by virtue of our hypothesis that ACE learns better representations for both rare and frequent words.", "We also qualitatively evaluate ACE word embeddings by inspecting the nearest neighbors of selected words.", "For the hypernym prediction task, following Vendrov et al.", "(2016) , hypernym pairs are created from the WordNet hierarchy's transitive closure.", "We use the released random development split and test split from Vendrov et al.", "(2016) , which both contain 4000 edges.", "For knowledge graph embeddings, we use TransD (Ji et al., 2015) as our base model, and perform ablation study to analyze the behavior of ACE with various add-on features, and confirm that entropy regularization is crucial for good performance in ACE.", "We also obtain link prediction results that are competitive or superior to the stateof-arts on the WN18 dataset (Bordes et al., 2014) .", "Training Word Embeddings from scratch In this experiment, we empirically observe that training word embeddings using ACE converges significantly faster than NCE after one epoch.", "As shown in Fig.", "3 both ACE (a mixture of p nce and g θ ) and just g θ (denoted by ADV) significantly outperforms the NCE baseline, with an absolute improvement of 73.1% and 58.5% respectively on RW score.", "We note similar results on WordSim-353 dataset where ACE and ADV outperforms NCE by 40.4% and 45.7%.", "We also evaluate our model qualitatively by inspecting the nearest neighbors of selected words in Table.", "1.", "We first present the five nearest neighbors to each word to show that both NCE and ACE models learn sensible embeddings.", "We then show that ACE embeddings have much better semantic relevance in a larger neighborhood (nearest neighbor 45-50).", "Finetuning Word Embeddings We take off-the-shelf pre-trained Glove embeddings which were trained using 6 billion tokens (Pennington et al., 2014) and fine-tune them using our algorithm.", "It is interesting to note that the original Glove objective does not fit into the contrastive learning framework, but nonetheless we find that they benefit from ACE.", "In fact, we observe that training such that 75% of the words appear as positive contexts is sufficient to beat the largest dimensionality pre-trained Glove 
model on word similarity tasks.", "We evaluate our performance on the Rare Word and WordSim353 data.", "As can be seen from our results in Table 2 , ACE on RW is not always better and for the 100d and 300d Glove embeddings is marginally worse.", "However, on WordSim353 ACE does considerably better across the board to the point where 50d Glove embeddings outperform the 300d baseline Glove model.", "Hypernym Prediction As shown in Table 3 , with ACE training, our method achieves a 1.5% improvement on accu- racy over Vendrov et al.", "(2016) without tunning any of the discriminator's hyperparameters.", "We further report training curve in Fig.", "1 , we report loss curve on randomly sampled pairs.", "We stress that in the ACE model, we train random pairs and generator generated pairs jointly, as shown in Fig.", "2 , hard negatives help the order embedding model converges faster.", "Ablation Study and Improving TransD To analyze different aspects of ACE, we perform an ablation study on the knowledge graph embedding task.", "As described in Sec.", "4.3, the base Method Accuracy (%) order-embeddings 90.6 order-embeddings + Our ACE 92.0 Table 3 : Order Embedding Performance model (discriminator) we apply ACE to is TransD (Ji et al., 2015) .", "Fig.", "5 shows validation performance as training progresses.", "All variants of ACE converges to better results than base NCE.", "Among ACE variants, all methods that include entropy regularization significantly outperform without entropy regularization.", "Without the self critical baseline variance reduction, learning could progress faster at the beginning but the final performance suffers slightly.", "The best performance is obtained without the additional off-policy learning of the generator.", "Table.", "4 shows the final test results on WN18 link prediction task.", "It is interesting to note that ACE improves MRR score more significantly than hit@10.", "As MRR is a lot more sensitive to the top rankings, i.e., how the correct configuration ranks among the competitive alternatives, this is consistent with the fact that ACE samples hard negatives and forces the base model to learn a more discriminative representation of the positive examples.", "(Trouillon et al., 2016) , which achieves the SOTA on this dataset.", "Among all TransD based models (the best results in this group is underlined), ACE improves over basic NCE and another GAN based approach KBGAN.", "The gap on MRR is likely due to the difference between TransD and COMPLEX models.", "Hard Negative Analysis To better understand the effect of the adversarial samples proposed by the generator we plot the discriminator loss on both p nce and g θ samples.", "In this context, a harder sample means a higher loss assigned by the discriminator.", "Fig.", "4 shows that discriminator loss for the word embedding task on g θ samples are always higher than on p nce samples, confirming that the generator is indeed sampling harder negatives.", "For Hypernym Prediction task, Fig.2 shows discriminator loss on negative pairs sampled from NCE and ACE respectively.", "The higher the loss the harder the negative pair is.", "As indicated in the left plot, loss on the ACE negative terms collapses faster than on the NCE negatives.", "After adding entropy regularization and weight decay, the generator works as expected.", "Limitations When the generator softmax is large, the current implementation of ACE training is computationally expensive.", "Although ACE converges faster per iteration, it may converge more slowly 
on wall-clock time depending on the cost of the softmax.", "However, embeddings are typically used as pre-trained building blocks for subsequent tasks.", "Thus, their learning is usually the pre-computation step for the more complex downstream models, and spending more time is justified, especially with GPU acceleration.", "We believe that the computational cost could potentially be reduced via some existing techniques such as the \"augment and reduce\" variational inference of Ruiz et al. (2018), adaptive softmax (Grave et al., 2016), or the \"sparsely-gated\" softmax of Shazeer et al.", "(2017), but leave that to future work.", "Another limitation is on the theoretical front.", "As noted in Goodfellow (2014), GAN learning does not implement maximum likelihood estimation (MLE), while NCE has MLE as an asymptotic limit.", "To the best of our knowledge, more distant connections between GAN and MLE training are not known, and tools for analyzing the equilibrium of a min-max game where players are parametrized by deep neural nets are currently not available.", "Conclusion In this paper, we propose Adversarial Contrastive Estimation as a general technique for improving supervised learning problems that learn by contrasting observed and fictitious samples.", "Specifically, we use a generator network in a conditional GAN-like setting to propose hard negative examples for our discriminator model.", "We find that mixing randomly sampled negative examples with those from an adaptive negative sampler leads to improved performance on a variety of embedding tasks.", "We validate our hypothesis that hard negative examples are critical to optimal learning and can be proposed via our ACE framework.", "Finally, we find that controlling the entropy of the generator through a regularization term and properly handling false negatives are crucial for successful training." ] }
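As a concrete companion to the generator learning described in the paper content above (REINFORCE with the discriminator loss as reward, a self-critical baseline, and entropy regularization), the following is a minimal PyTorch sketch. It is not the authors' released code: the module and function names (NegativeGenerator, generator_step, toy_disc_loss) are hypothetical, and the hinge form of the entropy penalty is one illustrative way to keep the generator entropy above log(k).

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NegativeGenerator(nn.Module):
    """g_theta(y | x): a categorical distribution over candidate negatives given x."""
    def __init__(self, emb_dim, num_candidates, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_candidates),
        )

    def forward(self, x_emb):              # x_emb: (batch, emb_dim)
        return self.net(x_emb)             # unnormalized logits over candidate negatives

def generator_step(gen, gen_opt, x_emb, disc_loss_fn, k=500):
    """One REINFORCE update of g_theta; the frozen discriminator's loss is the reward."""
    logits = gen(x_emb)
    dist = torch.distributions.Categorical(logits=logits)
    y_neg = dist.sample()                                   # sampled hard negatives
    reward = disc_loss_fn(x_emb, y_neg).detach()            # higher loss = harder negative

    # self-critical baseline: reward of the generator's most likely sample
    baseline = disc_loss_fn(x_emb, logits.argmax(dim=-1)).detach()

    # hinge entropy penalty: zero once H(g) exceeds log(k), positive otherwise
    ent_penalty = torch.clamp(math.log(k) - dist.entropy(), min=0.0)

    loss = (-(reward - baseline) * dist.log_prob(y_neg) + ent_penalty).mean()
    gen_opt.zero_grad()
    loss.backward()
    gen_opt.step()
    return loss.item()

# toy usage with random embeddings and a placeholder hinge loss standing in for the discriminator
if __name__ == "__main__":
    batch, dim, n_cand = 8, 32, 1000
    gen = NegativeGenerator(dim, n_cand)
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    cand_emb = torch.randn(n_cand, dim)
    def toy_disc_loss(x_emb, y_idx):
        return F.relu(1.0 - (x_emb * cand_emb[y_idx]).sum(-1))
    print(generator_step(gen, opt, torch.randn(batch, dim), toy_disc_loss))

In ACE this update would alternate with the embedding model's own update, with the discriminator parameters held fixed during the generator step.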
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "3", "4", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "5.5", "6", "7" ], "paper_header_content": [ "Introduction", "Background: contrastive learning", "Adversarial mixture noise", "Learning the generator", "Entropy and training stability", "Handling false negatives", "Variance Reduction", "Related Work", "Application of ACE on three tasks 4.1 Word Embeddings", "Order Embeddings Hypernym Prediction", "Knowledge Graph Embeddings", "Experiments", "Training Word Embeddings from scratch", "Finetuning Word Embeddings", "Hypernym Prediction", "Ablation Study and Improving TransD", "Hard Negative Analysis", "Limitations", "Conclusion" ] }
GEM-SciDuet-train-33#paper-1047#slide-9
Contemporary Work
GANs for NLP that are close to our work: MaskGAN (Fedus et al., 2018), Incorporating GAN for Negative Sampling in Knowledge Representation Learning (Wang et al., 2018), and KBGAN (Cai and Wang, 2017)
GANs for NLP that are close to our work: MaskGAN (Fedus et al., 2018), Incorporating GAN for Negative Sampling in Knowledge Representation Learning (Wang et al., 2018), and KBGAN (Cai and Wang, 2017)
[]
GEM-SciDuet-train-33#paper-1047#slide-10
1047
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
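The mixture negative sampler mentioned in the abstract above can be pictured in a few lines of code: with probability lam a negative is drawn from a fixed NCE noise distribution, otherwise from a learned conditional generator. This is a hedged sketch with hypothetical names (sample_negatives, generator_probs_fn), not code from the paper.

import numpy as np

def sample_negatives(x_batch, noise_probs, generator_probs_fn, lam=0.5, seed=0):
    """noise_probs: (V,) fixed noise; generator_probs_fn(x) -> (V,) conditional probabilities."""
    rng = np.random.default_rng(seed)
    vocab = len(noise_probs)
    out = []
    for x in x_batch:
        # mixture: lam * p_nce(y) + (1 - lam) * g_theta(y | x)
        probs = noise_probs if rng.random() < lam else generator_probs_fn(x)
        out.append(rng.choice(vocab, p=probs))
    return np.array(out)

# toy usage: a uniform noise distribution and a (degenerate) uniform "generator"
V = 10
uniform = np.full(V, 1.0 / V)
print(sample_negatives(np.arange(3), uniform, lambda x: uniform, lam=0.5))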
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226 ], "paper_content_text": [ "Introduction Many models learn by contrasting losses on observed positive examples with those on some fictitious negative examples, trying to decrease some score on positive ones while increasing it on negative ones.", "There are multiple reasons why such contrastive learning approach is needed.", "Computational tractability is one.", "For instance, instead of using softmax to predict a word for learning word embeddings, noise contrastive estimation (NCE) (Dyer, 2014; Mnih and Teh, 2012) can be used in skip-gram or CBOW word embedding models (Gutmann and Hyvärinen, 2012; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) .", "Another reason is * authors contributed equally † Work done while author was an intern at Borealis AI modeling need, as certain assumptions are best expressed as some score or energy in margin based or un-normalized probability models (Smith and Eisner, 2005) .", "For example, modeling entity relations as translations or variants thereof in a vector space naturally leads to a distance-based score to be minimized for observed entity-relation-entity triplets (Bordes et al., 2013) .", "Given a scoring function, the gradient of the model's parameters on observed positive examples can be readily computed, but the negative phase requires a design decision on how to sample data.", "In noise contrastive estimation for word embeddings, a negative example is formed by replacing a component of a positive pair by randomly selecting a sampled word from the vocabulary, resulting in a fictitious word-context pair which would be unlikely to actually exist in the dataset.", "This negative sampling by corruption approach is also used in learning knowledge graph embeddings (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Trouillon et al., 2016; Yang et al., 2014; Dettmers et al., 2017) , order embeddings (Vendrov et al., 2016) , caption generation (Dai and Lin, 2017) , etc.", "Typically the corruption distribution is the same for all inputs like in skip-gram or CBOW NCE, rather than being a conditional distribution that takes into account information about the input sample under consideration.", "Furthermore, the corruption process usually only encodes a human prior as to what constitutes a hard negative sample, rather than being learned from data.", "For these two reasons, the simple fixed corruption process often yields only easy negative examples.", "Easy negatives are sub-optimal 
for learning discriminative representation as they do not force the model to find critical characteristics of observed positive data, which has been independently discovered in applications outside NLP previously (Shrivastava et al., 2016) .", "Even if hard negatives are occasionally reached, the infrequency means slow convergence.", "Designing a more sophisticated corruption process could be fruitful, but requires costly trialand-error by a human expert.", "In this work, we propose to augment the simple corruption noise process in various embedding models with an adversarially learned conditional distribution, forming a mixture negative sampler that adapts to the underlying data and the embedding model training progress.", "The resulting method is referred to as adversarial contrastive estimation (ACE).", "The adaptive conditional model engages in a minimax game with the primary embedding model, much like in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a) , where a discriminator net (D), tries to distinguish samples produced by a generator (G) from real data (Goodfellow et al., 2014b) .", "In ACE, the main model learns to distinguish between a real positive example and a negative sample selected by the mixture of a fixed NCE sampler and an adversarial generator.", "The main model and the generator takes alternating turns to update their parameters.", "In fact, our method can be viewed as a conditional GAN (Mirza and Osindero, 2014) on discrete inputs, with a mixture generator consisting of a learned and a fixed distribution, with additional techniques introduced to achieve stable and convergent training of embedding models.", "In our proposed ACE approach, the conditional sampler finds harder negatives than NCE, while being able to gracefully fall back to NCE whenever the generator cannot find hard negatives.", "We demonstrate the efficacy and generality of the proposed method on three different learning tasks, word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) and knowledge graph embeddings (Ji et al., 2015) .", "Method Background: contrastive learning In the most general form, our method applies to supervised learning problems with a contrastive objective of the following form: L(ω) = E p(x + ,y + ,y − ) l ω (x + , y + , y − ) (1) where l ω (x + , y + , y − ) captures both the model with parameters ω and the loss that scores a positive tuple (x + , y + ) against a negative one (x + , y − ).", "E p(x + ,y + ,y − ) (.)", "denotes expectation with respect to some joint distribution over positive and negative samples.", "Furthermore, by the law of total expectation, and the fact that given x + , the negative sampling is not dependent on the positive label, i.e.", "p(y + , y − |x + ) = p(y + |x + )p(y − |x + ), Eq.", "1 can be re-written as E p(x + ) [E p(y + |x + )p(y − |x + ) l ω (x + , y + , y − )] (2) Separable loss In the case where the loss decomposes into a sum of scores on positive and negative tuples such as l ω (x + , y + , y − ) = s ω (x + , y + )−s ω (x + , y − ), then Expression.", "2 becomes E p + (x) [E p + (y|x) s ω (x, y) − E p − (y|x)sω (x, y)] (3) where we moved the + and − to p for notational brevity.", "Learning by stochastic gradient descent aims to adjust ω to pushing down s ω (x, y) on samples from p + while pushing ups ω (x, y) on samples from p − .", "Note that for generality, the scoring function for negative samples, denoted bỹ s ω , could be slightly different from s ω .", "For instance,s could contain a margin as 
in the case of Order Embeddings in Sec.", "4.2.", "Non separable loss Eq.", "1 is the general form that we would like to consider because for certain problems, the loss function cannot be separated into sums of terms containing only positive (x + , y + ) and terms with negatives (x + , y − ).", "An example of such a nonseparable loss is the triplet ranking loss (Schroff et al., 2015) : l ω = max(0, η + s ω (x + , y + ) − s ω (x + , y − )), which does not decompose due to the rectification.", "Noise contrastive estimation The typical NCE approach in tasks such as word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) , and knowledge graph embeddings can be viewed as a special case of Eq.", "2 by taking p(y − |x + ) to be some unconditional p nce (y).", "This leads to efficient computation during training, however, p nce (y) sacrifices the sampling efficiency of learning as the negatives produced using a fixed distribution are not tailored toward x + , and as a result are not necessarily hard negative examples.", "Thus, the model is not forced to discover discriminative representation of observed positive data.", "As training progresses, more and more negative examples are correctly learned, the probability of drawing a hard negative example diminishes further, causing slow convergence.", "Adversarial mixture noise To remedy the above mentioned problem of a fixed unconditional negative sampler, we propose to augment it into a mixture one, λp nce (y) + (1 − λ)g θ (y|x), where g θ is a conditional distribution with a learnable parameter θ and λ is a hyperparameter.", "The objective in Expression.", "2 can then be written as (conditioned on x for notational brevity): L(ω, θ; x) = λ E p(y + |x)pnce(y − ) l ω (x, y + , y − ) + (1 − λ) E p(y + |x)g θ (y − |x) l ω (x, y + , y − ) (4) We learn (ω, θ) in a GAN-style minimax game: min ω max θ V (ω, θ) = min ω max θ E p + (x) L(ω, θ; x) (5 ) The embedding model behind l ω (x, y + , y − ) is similar to the discriminator in (conditional) GAN (or critic in Wasserstein or Energy-based GAN (Zhao et al., 2016) , while g θ (y|x) acts as the generator.", "Henceforth, we will use the term discriminator (D) and embedding model interchangeably, and refer to g θ as the generator.", "Learning the generator There is one important distinction to typical GAN: g θ (y|x) defines a categorical distribution over possible y values, and samples are drawn accordingly; in contrast to typical GAN over continuous data space such as images, where samples are generated by an implicit generative model that warps noise vectors into data points.", "Due to the discrete sampling step, g θ cannot learn by receiving gradient through the discriminator.", "One possible solution is to use the Gumbel-softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016) , which gives a differentiable approximation.", "However, this differentiability comes at the cost of drawing N Gumbel samples per each categorical sample, where N is the number of categories.", "For word embeddings, N is the vocabulary size, and for knowledge graph embeddings, N is the number of entities, both leading to infeasible computational requirements.", "Instead, we use the REINFORCE (Williams, 1992) gradient estimator for ∇ θ L(θ, x): (1−λ) E −l ω (x, y + , y − )∇ θ log(g θ (y − |x)) (6) where the expectation E is with respect to p(y + , y − |x) = p(y + |x)g θ (y − |x), and the discriminator loss l ω (x, y + , y − ) acts as the reward.", "With a separable loss, the (conditional) value 
function of the minimax game is: L(ω, θ; x) = E p + (y|x) s ω (x, y) − E pnce(y)sω (x, y) − E g θ (y|x)sω (x, y) (7) and only the last term depends on the generator parameter ω.", "Hence, with a separable loss, the reward is −s(x + , y − ).", "This reduction does not happen with a non-separable loss, and we have to use l ω (x, y + , y − ).", "Entropy and training stability GAN training can suffer from instability and degeneracy where the generator probability mass collapses to a few modes or points.", "Much work has been done to stabilize GAN training in the continuous case Gulrajani et al., 2017; Cao et al., 2018) .", "In ACE, if the generator g θ probability mass collapses to a few candidates, then after the discriminator successfully learns about these negatives, g θ cannot adapt to select new hard negatives, because the REIN-FORCE gradient estimator Eq.", "6 relies on g θ being able to explore other candidates during sampling.", "Therefore, if the g θ probability mass collapses, instead of leading to oscillation as in typical GAN, the min-max game in ACE reaches an equilibrium where the discriminator wins and g θ can no longer adapt, then ACE falls back to NCE since the negative sampler has another mixture component from NCE.", "This behavior of gracefully falling back to NCE is more desirable than the alternative of stalled training if p − (y|x) does not have a simple p nce mixture component.", "However, we would still like to avoid such collapse, as the adversarial samples provide greater learning signals than NCE samples.", "To this end, we propose to use a regularizer to encourage the categorical distribution g θ (y|x) to have high entropy.", "In order to make the the regularizer interpretable and its hyperparameters easy to tune, we design the following form: R ent (x) = min(0, c − H(g θ (y|x))) (8) where H(g θ (y|x)) is the entropy of the categorical distribution g θ (y|x), and c = log(k) is the entropy of a uniform distribution over k choices, and k is a hyper-parameter.", "Intuitively, R ent expresses the prior that the generator should spread its mass over more than k choices for each x.", "Handling false negatives During negative sampling, p − (y|x) could actually produce y that forms a positive pair that exists in the training set, i.e., a false negative.", "This possibility exists in NCE already, but since p nce is not adaptive, the probability of sampling a false negative is low.", "Hence in NCE, the score on this false negative (true observation) pair is pushed up less in the negative term than in the positive term.", "However, with the adaptive sampler, g ω (y|x), false negatives become a much more severe issue.", "g ω (y|x) can learn to concentrate its mass on a few false negatives, significantly canceling the learning of those observations in the positive phase.", "The entropy regularization reduces this problem as it forces the generator to spread its mass, hence reducing the chance of a false negative.", "To further alleviate this problem, whenever computationally feasible, we apply an additional two-step technique.", "First, we maintain a hash map of the training data in memory, and use it to efficiently detect if a negative sample (x + , y − ) is an actual observation.", "If so, its contribution to the loss is given a zero weight in ω learning step.", "Second, to upate θ in the generator learning step, the reward for false negative samples are replaced by a large penalty, so that the REINFORCE gradient update would steer g θ away from those samples.", "The second 
step is needed to prevent null computation where g θ learns to sample false negatives which are subsequently ignored by the discriminator update for ω. Variance Reduction The basic REINFORCE gradient estimator is poised with high variance, so in practice one often needs to apply variance reduction techniques.", "The most basic form of variance reduction is to subtract a baseline from the reward.", "As long as the baseline is not a function of actions (i.e., samples y − being drawn), the REINFORCE gradient estimator remains unbiased.", "More advanced gradient estimators exist that also reduce variance (Grathwohl et al., 2017; Tucker et al., 2017; Liu et al., 2018) , but for simplicity we use the self-critical baseline method (Rennie et al., 2016) , where the baseline is b(x) = l ω (y + , y , x), or b(x) = −s ω (y , x) in the separable loss case, and y = argmax i g θ (y i |x).", "In other words, the baseline is the reward of the most likely sample according to the generator.", "2.7 Improving exploration in g θ by leveraging NCE samples In Sec.", "2.4 we touched on the need for sufficient exploration in g θ .", "It is possible to also leverage negative samples from NCE to help the generator learn.", "This is essentially off-policy exploration in reinforcement learning since NCE samples are not drawn according to g θ (y|x).", "The generator learning can use importance re-weighting to leverage those samples.", "The resulting REIN-FORCE gradient estimator is basically the same as Eq.", "6 except that the rewards are reweighted by g θ (y − |x)/p nce (y − ), and the expectation is with respect to p(y + |x)p nce (y − ).", "This additional offpolicy learning term provides gradient information for generator learning if g θ (y − |x) is not zero, meaning that for it to be effective in helping exploration, the generator cannot be collapsed at the first place.", "Hence, in practice, this term is only used to further help on top of the entropy regularization, but it does not replace it.", "Related Work Smith and Eisner (2005) proposed contrastive estimation as a way for unsupervised learning of log-linear models by taking implicit evidence from user-defined neighborhoods around observed datapoints.", "Gutmann and Hyvärinen (2010) introduced NCE as an alternative to the hierarchical softmax.", "In the works of Mnih and Teh (2012) and Mnih and Kavukcuoglu (2013) , NCE is applied to log-bilinear models and Vaswani et al.", "(2013) applied NCE to neural probabilistic language models (Yoshua et al., 2003) .", "Compared to these previous NCE methods that rely on simple fixed sampling heuristics, ACE uses an adaptive sampler that produces harder negatives.", "In the domain of max-margin estimation for structured prediction (Taskar et al., 2005) , loss augmented MAP inference plays the role of finding hard negatives (the hardest).", "However, this inference is only tractable in a limited class of models such structured SVM (Tsochantaridis et al., 2005) .", "Compared to those models that use exact maximization to find the hardest negative configuration each time, the generator in ACE can be viewed as learning an approximate amortized inference network.", "Concurrently to this work, Tu and Gimpel (2018) proposes a very similar framework, using a learned inference network for Structured prediction energy networks (SPEN) (Belanger and McCallum, 2016) .", "Concurrent with our work, there have been other interests in applying the GAN to NLP problems (Fedus et al., 2018; Wang et al., 2018; Cai and Wang, 2017) .", 
"Knowledge graph models naturally lend to a GAN setup, and has been the subject of study in Wang et al.", "(2018) and Cai and Wang (2017) .", "These two concurrent works are most closely related to one of the three tasks on which we study ACE in this work.", "Besides a more general formulation that applies to problems beyond those considered in Wang et al.", "(2018) and Cai and Wang (2017) , the techniques introduced in our work on handling false negatives and entropy regularization lead to improved experimental results as shown in Sec.", "5.4.", "Application of ACE on three tasks 4.1 Word Embeddings Word embeddings learn a vector representation of words from co-occurrences in a text corpus.", "NCE casts this learning problem as a binary classification where the model tries to distinguish positive word and context pairs, from negative noise samples composed of word and false context pairs.", "The NCE objective in Skip-gram (Mikolov et al., 2013) for word embeddings is a separable loss of the form: L = − wt∈V [log p(y = 1|w t , w + c ) + K c=1 log p(y = 0|w t , w − c )] (9) Here, w + c is sampled from the set of true contexts and w − c ∼ Q is sampled k times from a fixed noise distribution.", "Mikolov et al.", "(2013) introduced a further simplification of NCE, called \"Negative Sampling\" (Dyer, 2014) .", "With respect to our ACE framework, the difference between NCE and Negative Sampling is inconsequential, so we continue the discussion using NCE.", "A drawback of this sampling scheme is that it favors more common words as context.", "Another issue is that the negative context words are sampled in the same way, rather than tailored toward the actual target word.", "To apply ACE to this problem we first define the value function for the minimax game, V (D, G), as follows: V (D, G) = E p + (wc) [log D(w c , w t )] − E pnce(wc) [− log(1 − D(w c , w t ))] − E g θ (wc|wt) [− log(1 − D(w c , w t ))] (10) with D = p(y = 1|w t , w c ) and G = g θ (w c |w t ).", "Implementation details For our experiments, we train all our models on a single pass of the May 2017 dump of the English Wikipedia with lowercased unigrams.", "The vocabulary size is restricted to the top 150k most frequent words when training from scratch while for finetuning we use the same vocabulary as Pennington et al.", "(2014) , which is 400k of the most frequent words.", "We use 5 NCE samples for each positive sample and 1 adversarial sample in a window size of 10 and the same positive subsampling scheme proposed by Mikolov et al.", "(2013) .", "Learning for both G and D uses Adam (Kingma and Ba, 2014) optimizer with its default parameters.", "Our conditional discriminator is modeled using the Skip-Gram architecture, which is a two layer neural network with a linear mapping between the layers.", "The generator network consists of an embedding layer followed by two small hidden layers, followed by an output softmax layer.", "The first layer of the generator shares its weights with the second embedding layer in the discriminator network, which we find really speeds up convergence as the generator does not have to relearn its own set of embeddings.", "The difference between the discriminator and generator is that a sigmoid nonlinearity is used after the second layer in the discriminator, while in the generator, a softmax layer is used to define a categorical distribution over negative word candidates.", "We find that controlling the generator entropy is critical for finetuning experiments as otherwise the generator collapses to its 
favorite negative sample.", "The word embeddings are taken to be the first dense matrix in the discriminator.", "Order Embeddings Hypernym Prediction As introduced in Vendrov et al.", "(2016) , ordered representations over hierarchy can be learned by order embeddings.", "An example task for such ordered representation is hypernym prediction.", "A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second.", "For completeness, we briefly describe order embeddings, then analyze ACE on the hypernym prediction task.", "In order embeddings, each entity is represented by a vector in R N , the score for a positive ordered pair of entities (x, y) is defined by s ω (x, y) = ||max(0, y − x)|| 2 and, score for a negative ordered pair (x + , y − ) is defined bỹ s ω (x + , y − ) = max{0, η − s(x + , y − )}, where is η is the margin.", "Let f (u) be the embedding function which takes an entity as input and outputs en embedding vector.", "We define P as a set of positive pairs and N as negative pairs, the separable loss function for order embedding task is defined by: L = (u,v)∈P s ω (f (u), f (v)))+ (u,v)∈Ns (f (u), f (v)) (11) Implementation details Our generator for this task is just a linear fully connected softmax layer, taking an embedding vector from discriminator as input and outputting a categorical distribution over the entity set.", "For the discriminator, we inherit all model setting from Vendrov et al.", "(2016) : we use 50 dimensions hidden state and bash size 1000, a learning rate of 0.01 and the Adam optimizer.", "For the generator, we use a batch size of 1000, a learning rate 0.01 and the Adam optimizer.", "We apply weight decay with rate 0.1 and entropy loss regularization as described in Sec.", "2.4.", "We handle false negative as described in Sec.", "2.5.", "After cross validation, variance reduction and leveraging NCE samples does not greatly affect the order embedding task.", "Knowledge Graph Embeddings Knowledge graphs contain entity and relation data of the form (head entity, relation, tail entity), and the goal is to learn from observed positive entity relations and predict missing links (a.k.a.", "link prediction).", "There have been many works on knowledge graph embeddings, e.g.", "TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , TransH (Wang et al., 2014) , TransD (Ji et al., 2015) , Complex (Trouillon et al., 2016) , DistMult (Yang et al., 2014) and ConvE (Dettmers et al., 2017) .", "Many of them use a contrastive learning objective.", "Here we take TransD as an example, and modify its noise contrastive learning to ACE, and demonstrate significant improvement in sample efficiency and link prediction results.", "Implementation details Let a positive entity-relation-entity triplet be denoted by ξ + = (h + , r + , t + ), and a negative triplet could either have its head or tail be a negative sample, i.e.", "ξ − = (h − , r + , t + ) or ξ − = (h + , r + , t − ).", "In either case, the general formulation in Sec.", "2.1 still applies.", "The non-separable loss function takes on the form: l = max(0, η + s ω (ξ + ) − s ω (ξ − )) (12) The scoring rule is: s = h ⊥ + r − t ⊥ (13) where r is the embedding vector for r, and h ⊥ is projection of the embedding of h onto the space of r by h ⊥ = h + r p h p h, where r p and h p are projection parameters of the model.", "t ⊥ is defined in a similar way through parameters t, t p and r p .", "The form of the generator g θ (t − |r + , h + ) is chosen to be f θ (h ⊥ , h ⊥ + r), where f θ 
is a feedforward neural net that concatenates its two input arguments, then propagates through two hidden layers, followed by a final softmax output layer.", "As a function of (r + , h + ), g θ shares parameter with the discriminator, as the inputs to f θ are the embedding vectors.", "During generator learning, only θ is updated and the TransD model embedding parameters are frozen.", "Experiments We evaluate ACE with experiments on word embeddings, order embeddings, and knowledge graph embeddings tasks.", "In short, whenever the original learning objective is contrastive (all tasks except Glove fine-tuning) our results consistently show that ACE improves over NCE.", "In some cases, we include additional comparisons to the state-of-art results on the task to put the significance of such improvements in context: the generic ACE can often make a reasonable baseline competitive with SOTA methods that are optimized for the task.", "For word embeddings, we evaluate models trained from scratch as well as fine-tuned Glove models (Pennington et al., 2014) on word similarity tasks that consist of computing the similarity between word pairs where the ground truth is an average of human scores.", "We choose the Rare word dataset (Luong et al., 2013) and WordSim-353 (Finkelstein et al., 2001) by virtue of our hypothesis that ACE learns better representations for both rare and frequent words.", "We also qualitatively evaluate ACE word embeddings by inspecting the nearest neighbors of selected words.", "For the hypernym prediction task, following Vendrov et al.", "(2016) , hypernym pairs are created from the WordNet hierarchy's transitive closure.", "We use the released random development split and test split from Vendrov et al.", "(2016) , which both contain 4000 edges.", "For knowledge graph embeddings, we use TransD (Ji et al., 2015) as our base model, and perform ablation study to analyze the behavior of ACE with various add-on features, and confirm that entropy regularization is crucial for good performance in ACE.", "We also obtain link prediction results that are competitive or superior to the stateof-arts on the WN18 dataset (Bordes et al., 2014) .", "Training Word Embeddings from scratch In this experiment, we empirically observe that training word embeddings using ACE converges significantly faster than NCE after one epoch.", "As shown in Fig.", "3 both ACE (a mixture of p nce and g θ ) and just g θ (denoted by ADV) significantly outperforms the NCE baseline, with an absolute improvement of 73.1% and 58.5% respectively on RW score.", "We note similar results on WordSim-353 dataset where ACE and ADV outperforms NCE by 40.4% and 45.7%.", "We also evaluate our model qualitatively by inspecting the nearest neighbors of selected words in Table.", "1.", "We first present the five nearest neighbors to each word to show that both NCE and ACE models learn sensible embeddings.", "We then show that ACE embeddings have much better semantic relevance in a larger neighborhood (nearest neighbor 45-50).", "Finetuning Word Embeddings We take off-the-shelf pre-trained Glove embeddings which were trained using 6 billion tokens (Pennington et al., 2014) and fine-tune them using our algorithm.", "It is interesting to note that the original Glove objective does not fit into the contrastive learning framework, but nonetheless we find that they benefit from ACE.", "In fact, we observe that training such that 75% of the words appear as positive contexts is sufficient to beat the largest dimensionality pre-trained Glove 
model on word similarity tasks.", "We evaluate our performance on the Rare Word and WordSim353 data.", "As can be seen from our results in Table 2 , ACE on RW is not always better and for the 100d and 300d Glove embeddings is marginally worse.", "However, on WordSim353 ACE does considerably better across the board to the point where 50d Glove embeddings outperform the 300d baseline Glove model.", "Hypernym Prediction As shown in Table 3 , with ACE training, our method achieves a 1.5% improvement on accu- racy over Vendrov et al.", "(2016) without tunning any of the discriminator's hyperparameters.", "We further report training curve in Fig.", "1 , we report loss curve on randomly sampled pairs.", "We stress that in the ACE model, we train random pairs and generator generated pairs jointly, as shown in Fig.", "2 , hard negatives help the order embedding model converges faster.", "Ablation Study and Improving TransD To analyze different aspects of ACE, we perform an ablation study on the knowledge graph embedding task.", "As described in Sec.", "4.3, the base Method Accuracy (%) order-embeddings 90.6 order-embeddings + Our ACE 92.0 Table 3 : Order Embedding Performance model (discriminator) we apply ACE to is TransD (Ji et al., 2015) .", "Fig.", "5 shows validation performance as training progresses.", "All variants of ACE converges to better results than base NCE.", "Among ACE variants, all methods that include entropy regularization significantly outperform without entropy regularization.", "Without the self critical baseline variance reduction, learning could progress faster at the beginning but the final performance suffers slightly.", "The best performance is obtained without the additional off-policy learning of the generator.", "Table.", "4 shows the final test results on WN18 link prediction task.", "It is interesting to note that ACE improves MRR score more significantly than hit@10.", "As MRR is a lot more sensitive to the top rankings, i.e., how the correct configuration ranks among the competitive alternatives, this is consistent with the fact that ACE samples hard negatives and forces the base model to learn a more discriminative representation of the positive examples.", "(Trouillon et al., 2016) , which achieves the SOTA on this dataset.", "Among all TransD based models (the best results in this group is underlined), ACE improves over basic NCE and another GAN based approach KBGAN.", "The gap on MRR is likely due to the difference between TransD and COMPLEX models.", "Hard Negative Analysis To better understand the effect of the adversarial samples proposed by the generator we plot the discriminator loss on both p nce and g θ samples.", "In this context, a harder sample means a higher loss assigned by the discriminator.", "Fig.", "4 shows that discriminator loss for the word embedding task on g θ samples are always higher than on p nce samples, confirming that the generator is indeed sampling harder negatives.", "For Hypernym Prediction task, Fig.2 shows discriminator loss on negative pairs sampled from NCE and ACE respectively.", "The higher the loss the harder the negative pair is.", "As indicated in the left plot, loss on the ACE negative terms collapses faster than on the NCE negatives.", "After adding entropy regularization and weight decay, the generator works as expected.", "Limitations When the generator softmax is large, the current implementation of ACE training is computationally expensive.", "Although ACE converges faster per iteration, it may converge more slowly 
on wall-clock time depending on the cost of the softmax.", "However, embeddings are typically used as pre-trained building blocks for subsequent tasks.", "Thus, their learning is usually the pre-computation step for the more complex downstream models, and spending more time is justified, especially with GPU acceleration.", "We believe that the computational cost could potentially be reduced via some existing techniques such as the \"augment and reduce\" variational inference of Ruiz et al. (2018), adaptive softmax (Grave et al., 2016), or the \"sparsely-gated\" softmax of Shazeer et al.", "(2017), but leave that to future work.", "Another limitation is on the theoretical front.", "As noted in Goodfellow (2014), GAN learning does not implement maximum likelihood estimation (MLE), while NCE has MLE as an asymptotic limit.", "To the best of our knowledge, more distant connections between GAN and MLE training are not known, and tools for analyzing the equilibrium of a min-max game where players are parametrized by deep neural nets are currently not available.", "Conclusion In this paper, we propose Adversarial Contrastive Estimation as a general technique for improving supervised learning problems that learn by contrasting observed and fictitious samples.", "Specifically, we use a generator network in a conditional GAN-like setting to propose hard negative examples for our discriminator model.", "We find that mixing randomly sampled negative examples with those from an adaptive negative sampler leads to improved performance on a variety of embedding tasks.", "We validate our hypothesis that hard negative examples are critical to optimal learning and can be proposed via our ACE framework.", "Finally, we find that controlling the entropy of the generator through a regularization term and properly handling false negatives are crucial for successful training." ] }
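The two-step false-negative handling described in the paper content above (zero-weighting the discriminator loss on sampled negatives that are actually observed positives, and replacing the generator's reward for those samples with a large penalty) can be sketched as follows. The function name, tensor layout, and penalty value are illustrative assumptions, not the released implementation.

import torch

def handle_false_negatives(x_idx, y_neg, observed_pairs, disc_loss, reward, penalty=-10.0):
    """observed_pairs: set of (x, y) tuples taken from the training data."""
    mask = torch.tensor(
        [(int(x), int(y)) in observed_pairs for x, y in zip(x_idx, y_neg)],
        dtype=torch.bool,
    )
    disc_loss = torch.where(mask, torch.zeros_like(disc_loss), disc_loss)  # step 1: zero weight in the discriminator update
    reward = torch.where(mask, torch.full_like(reward, penalty), reward)   # step 2: steer g_theta away via REINFORCE
    return disc_loss, reward

# toy usage
observed = {(0, 3), (1, 5)}
x = torch.tensor([0, 1, 2])
y = torch.tensor([3, 9, 9])
loss = torch.tensor([0.7, 0.2, 0.9])
print(handle_false_negatives(x, y, observed, loss, loss.clone()))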
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "3", "4", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "5.5", "6", "7" ], "paper_header_content": [ "Introduction", "Background: contrastive learning", "Adversarial mixture noise", "Learning the generator", "Entropy and training stability", "Handling false negatives", "Variance Reduction", "Related Work", "Application of ACE on three tasks 4.1 Word Embeddings", "Order Embeddings Hypernym Prediction", "Knowledge Graph Embeddings", "Experiments", "Training Word Embeddings from scratch", "Finetuning Word Embeddings", "Hypernym Prediction", "Ablation Study and Improving TransD", "Hard Negative Analysis", "Limitations", "Conclusion" ] }
GEM-SciDuet-train-33#paper-1047#slide-10
Example Knowledge Graph Embeddings
Data in the form of triplets (head entity, relation, tail entity). For example, {United States of America, partially contained by ocean, Pacific}. Basic Idea: the embeddings for h, r, and t should roughly satisfy h + r ≈ t. Goal is to learn from observed positive entity relations and predict missing links.
Data in the form of triplets (head entity, relation, tail entity). For example, {United States of America, partially contained by ocean, Pacific}. Basic Idea: the embeddings for h, r, and t should roughly satisfy h + r ≈ t. Goal is to learn from observed positive entity relations and predict missing links.
[]
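For readers unfamiliar with the h + r ≈ t picture on the slide above, the following is a generic TransE-style sketch of the translation-based score and the margin ranking loss, with uniform tail corruption as the baseline negative sampler. It illustrates the setting only; it is not the TransD model or the ACE sampler studied in the paper.

import torch
import torch.nn as nn

class TransE(nn.Module):
    def __init__(self, n_entities, n_relations, dim=50):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def score(self, h, r, t):
        # smaller distance = more plausible triple (h + r should land close to t)
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=2, dim=-1)

def margin_loss(model, h, r, t, t_neg, margin=1.0):
    return torch.relu(margin + model.score(h, r, t) - model.score(h, r, t_neg)).mean()

# toy usage: corrupt tails uniformly at random
model = TransE(n_entities=100, n_relations=10)
h = torch.randint(0, 100, (32,))
r = torch.randint(0, 10, (32,))
t = torch.randint(0, 100, (32,))
t_neg = torch.randint(0, 100, (32,))
print(margin_loss(model, h, r, t, t_neg).item())

ACE, as described in the paper content, replaces the uniform t_neg draw with a mixture of this fixed sampler and a learned conditional generator over candidate tails.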
GEM-SciDuet-train-33#paper-1047#slide-11
1047
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
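The abstract above also lists order embeddings; as a reference point, this is a minimal sketch of the order-violation score s(x, y) = ||max(0, y - x)||^2 and the margin loss on negative pairs, following the description in the paper content. Variable names and the toy data are illustrative, not the released code.

import torch

def order_score(x, y):
    # zero when y <= x coordinate-wise, i.e. when the pair satisfies the partial order
    return torch.clamp(y - x, min=0.0).pow(2).sum(-1)

def order_embedding_loss(x_pos, y_pos, x_neg, y_neg, margin=1.0):
    positive = order_score(x_pos, y_pos)
    negative = torch.relu(margin - order_score(x_neg, y_neg))
    return (positive + negative).mean()

# toy usage with random 50-dimensional embeddings
x, y = torch.rand(16, 50), torch.rand(16, 50)
xn, yn = torch.rand(16, 50), torch.rand(16, 50)
print(order_embedding_loss(x, y, xn, yn).item())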
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226 ], "paper_content_text": [ "Introduction Many models learn by contrasting losses on observed positive examples with those on some fictitious negative examples, trying to decrease some score on positive ones while increasing it on negative ones.", "There are multiple reasons why such contrastive learning approach is needed.", "Computational tractability is one.", "For instance, instead of using softmax to predict a word for learning word embeddings, noise contrastive estimation (NCE) (Dyer, 2014; Mnih and Teh, 2012) can be used in skip-gram or CBOW word embedding models (Gutmann and Hyvärinen, 2012; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) .", "Another reason is * authors contributed equally † Work done while author was an intern at Borealis AI modeling need, as certain assumptions are best expressed as some score or energy in margin based or un-normalized probability models (Smith and Eisner, 2005) .", "For example, modeling entity relations as translations or variants thereof in a vector space naturally leads to a distance-based score to be minimized for observed entity-relation-entity triplets (Bordes et al., 2013) .", "Given a scoring function, the gradient of the model's parameters on observed positive examples can be readily computed, but the negative phase requires a design decision on how to sample data.", "In noise contrastive estimation for word embeddings, a negative example is formed by replacing a component of a positive pair by randomly selecting a sampled word from the vocabulary, resulting in a fictitious word-context pair which would be unlikely to actually exist in the dataset.", "This negative sampling by corruption approach is also used in learning knowledge graph embeddings (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Trouillon et al., 2016; Yang et al., 2014; Dettmers et al., 2017) , order embeddings (Vendrov et al., 2016) , caption generation (Dai and Lin, 2017) , etc.", "Typically the corruption distribution is the same for all inputs like in skip-gram or CBOW NCE, rather than being a conditional distribution that takes into account information about the input sample under consideration.", "Furthermore, the corruption process usually only encodes a human prior as to what constitutes a hard negative sample, rather than being learned from data.", "For these two reasons, the simple fixed corruption process often yields only easy negative examples.", "Easy negatives are sub-optimal 
for learning discriminative representation as they do not force the model to find critical characteristics of observed positive data, which has been independently discovered in applications outside NLP previously (Shrivastava et al., 2016) .", "Even if hard negatives are occasionally reached, the infrequency means slow convergence.", "Designing a more sophisticated corruption process could be fruitful, but requires costly trialand-error by a human expert.", "In this work, we propose to augment the simple corruption noise process in various embedding models with an adversarially learned conditional distribution, forming a mixture negative sampler that adapts to the underlying data and the embedding model training progress.", "The resulting method is referred to as adversarial contrastive estimation (ACE).", "The adaptive conditional model engages in a minimax game with the primary embedding model, much like in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a) , where a discriminator net (D), tries to distinguish samples produced by a generator (G) from real data (Goodfellow et al., 2014b) .", "In ACE, the main model learns to distinguish between a real positive example and a negative sample selected by the mixture of a fixed NCE sampler and an adversarial generator.", "The main model and the generator takes alternating turns to update their parameters.", "In fact, our method can be viewed as a conditional GAN (Mirza and Osindero, 2014) on discrete inputs, with a mixture generator consisting of a learned and a fixed distribution, with additional techniques introduced to achieve stable and convergent training of embedding models.", "In our proposed ACE approach, the conditional sampler finds harder negatives than NCE, while being able to gracefully fall back to NCE whenever the generator cannot find hard negatives.", "We demonstrate the efficacy and generality of the proposed method on three different learning tasks, word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) and knowledge graph embeddings (Ji et al., 2015) .", "Method Background: contrastive learning In the most general form, our method applies to supervised learning problems with a contrastive objective of the following form: L(ω) = E p(x + ,y + ,y − ) l ω (x + , y + , y − ) (1) where l ω (x + , y + , y − ) captures both the model with parameters ω and the loss that scores a positive tuple (x + , y + ) against a negative one (x + , y − ).", "E p(x + ,y + ,y − ) (.)", "denotes expectation with respect to some joint distribution over positive and negative samples.", "Furthermore, by the law of total expectation, and the fact that given x + , the negative sampling is not dependent on the positive label, i.e.", "p(y + , y − |x + ) = p(y + |x + )p(y − |x + ), Eq.", "1 can be re-written as E p(x + ) [E p(y + |x + )p(y − |x + ) l ω (x + , y + , y − )] (2) Separable loss In the case where the loss decomposes into a sum of scores on positive and negative tuples such as l ω (x + , y + , y − ) = s ω (x + , y + )−s ω (x + , y − ), then Expression.", "2 becomes E p + (x) [E p + (y|x) s ω (x, y) − E p − (y|x)sω (x, y)] (3) where we moved the + and − to p for notational brevity.", "Learning by stochastic gradient descent aims to adjust ω to pushing down s ω (x, y) on samples from p + while pushing ups ω (x, y) on samples from p − .", "Note that for generality, the scoring function for negative samples, denoted bỹ s ω , could be slightly different from s ω .", "For instance,s could contain a margin as 
in the case of Order Embeddings in Sec.", "4.2.", "Non separable loss Eq.", "1 is the general form that we would like to consider because for certain problems, the loss function cannot be separated into sums of terms containing only positive (x + , y + ) and terms with negatives (x + , y − ).", "An example of such a nonseparable loss is the triplet ranking loss (Schroff et al., 2015) : l ω = max(0, η + s ω (x + , y + ) − s ω (x + , y − )), which does not decompose due to the rectification.", "Noise contrastive estimation The typical NCE approach in tasks such as word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) , and knowledge graph embeddings can be viewed as a special case of Eq.", "2 by taking p(y − |x + ) to be some unconditional p nce (y).", "This leads to efficient computation during training, however, p nce (y) sacrifices the sampling efficiency of learning as the negatives produced using a fixed distribution are not tailored toward x + , and as a result are not necessarily hard negative examples.", "Thus, the model is not forced to discover discriminative representation of observed positive data.", "As training progresses, more and more negative examples are correctly learned, the probability of drawing a hard negative example diminishes further, causing slow convergence.", "Adversarial mixture noise To remedy the above mentioned problem of a fixed unconditional negative sampler, we propose to augment it into a mixture one, λp nce (y) + (1 − λ)g θ (y|x), where g θ is a conditional distribution with a learnable parameter θ and λ is a hyperparameter.", "The objective in Expression.", "2 can then be written as (conditioned on x for notational brevity): L(ω, θ; x) = λ E p(y + |x)pnce(y − ) l ω (x, y + , y − ) + (1 − λ) E p(y + |x)g θ (y − |x) l ω (x, y + , y − ) (4) We learn (ω, θ) in a GAN-style minimax game: min ω max θ V (ω, θ) = min ω max θ E p + (x) L(ω, θ; x) (5 ) The embedding model behind l ω (x, y + , y − ) is similar to the discriminator in (conditional) GAN (or critic in Wasserstein or Energy-based GAN (Zhao et al., 2016) , while g θ (y|x) acts as the generator.", "Henceforth, we will use the term discriminator (D) and embedding model interchangeably, and refer to g θ as the generator.", "Learning the generator There is one important distinction to typical GAN: g θ (y|x) defines a categorical distribution over possible y values, and samples are drawn accordingly; in contrast to typical GAN over continuous data space such as images, where samples are generated by an implicit generative model that warps noise vectors into data points.", "Due to the discrete sampling step, g θ cannot learn by receiving gradient through the discriminator.", "One possible solution is to use the Gumbel-softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016) , which gives a differentiable approximation.", "However, this differentiability comes at the cost of drawing N Gumbel samples per each categorical sample, where N is the number of categories.", "For word embeddings, N is the vocabulary size, and for knowledge graph embeddings, N is the number of entities, both leading to infeasible computational requirements.", "Instead, we use the REINFORCE (Williams, 1992) gradient estimator for ∇ θ L(θ, x): (1−λ) E −l ω (x, y + , y − )∇ θ log(g θ (y − |x)) (6) where the expectation E is with respect to p(y + , y − |x) = p(y + |x)g θ (y − |x), and the discriminator loss l ω (x, y + , y − ) acts as the reward.", "With a separable loss, the (conditional) value 
function of the minimax game is: L(ω, θ; x) = E p + (y|x) s ω (x, y) − E pnce(y)sω (x, y) − E g θ (y|x)sω (x, y) (7) and only the last term depends on the generator parameter ω.", "Hence, with a separable loss, the reward is −s(x + , y − ).", "This reduction does not happen with a non-separable loss, and we have to use l ω (x, y + , y − ).", "Entropy and training stability GAN training can suffer from instability and degeneracy where the generator probability mass collapses to a few modes or points.", "Much work has been done to stabilize GAN training in the continuous case Gulrajani et al., 2017; Cao et al., 2018) .", "In ACE, if the generator g θ probability mass collapses to a few candidates, then after the discriminator successfully learns about these negatives, g θ cannot adapt to select new hard negatives, because the REIN-FORCE gradient estimator Eq.", "6 relies on g θ being able to explore other candidates during sampling.", "Therefore, if the g θ probability mass collapses, instead of leading to oscillation as in typical GAN, the min-max game in ACE reaches an equilibrium where the discriminator wins and g θ can no longer adapt, then ACE falls back to NCE since the negative sampler has another mixture component from NCE.", "This behavior of gracefully falling back to NCE is more desirable than the alternative of stalled training if p − (y|x) does not have a simple p nce mixture component.", "However, we would still like to avoid such collapse, as the adversarial samples provide greater learning signals than NCE samples.", "To this end, we propose to use a regularizer to encourage the categorical distribution g θ (y|x) to have high entropy.", "In order to make the the regularizer interpretable and its hyperparameters easy to tune, we design the following form: R ent (x) = min(0, c − H(g θ (y|x))) (8) where H(g θ (y|x)) is the entropy of the categorical distribution g θ (y|x), and c = log(k) is the entropy of a uniform distribution over k choices, and k is a hyper-parameter.", "Intuitively, R ent expresses the prior that the generator should spread its mass over more than k choices for each x.", "Handling false negatives During negative sampling, p − (y|x) could actually produce y that forms a positive pair that exists in the training set, i.e., a false negative.", "This possibility exists in NCE already, but since p nce is not adaptive, the probability of sampling a false negative is low.", "Hence in NCE, the score on this false negative (true observation) pair is pushed up less in the negative term than in the positive term.", "However, with the adaptive sampler, g ω (y|x), false negatives become a much more severe issue.", "g ω (y|x) can learn to concentrate its mass on a few false negatives, significantly canceling the learning of those observations in the positive phase.", "The entropy regularization reduces this problem as it forces the generator to spread its mass, hence reducing the chance of a false negative.", "To further alleviate this problem, whenever computationally feasible, we apply an additional two-step technique.", "First, we maintain a hash map of the training data in memory, and use it to efficiently detect if a negative sample (x + , y − ) is an actual observation.", "If so, its contribution to the loss is given a zero weight in ω learning step.", "Second, to upate θ in the generator learning step, the reward for false negative samples are replaced by a large penalty, so that the REINFORCE gradient update would steer g θ away from those samples.", "The second 
step is needed to prevent null computation where g θ learns to sample false negatives which are subsequently ignored by the discriminator update for ω. Variance Reduction The basic REINFORCE gradient estimator is poised with high variance, so in practice one often needs to apply variance reduction techniques.", "The most basic form of variance reduction is to subtract a baseline from the reward.", "As long as the baseline is not a function of actions (i.e., samples y − being drawn), the REINFORCE gradient estimator remains unbiased.", "More advanced gradient estimators exist that also reduce variance (Grathwohl et al., 2017; Tucker et al., 2017; Liu et al., 2018) , but for simplicity we use the self-critical baseline method (Rennie et al., 2016) , where the baseline is b(x) = l ω (y + , y , x), or b(x) = −s ω (y , x) in the separable loss case, and y = argmax i g θ (y i |x).", "In other words, the baseline is the reward of the most likely sample according to the generator.", "2.7 Improving exploration in g θ by leveraging NCE samples In Sec.", "2.4 we touched on the need for sufficient exploration in g θ .", "It is possible to also leverage negative samples from NCE to help the generator learn.", "This is essentially off-policy exploration in reinforcement learning since NCE samples are not drawn according to g θ (y|x).", "The generator learning can use importance re-weighting to leverage those samples.", "The resulting REIN-FORCE gradient estimator is basically the same as Eq.", "6 except that the rewards are reweighted by g θ (y − |x)/p nce (y − ), and the expectation is with respect to p(y + |x)p nce (y − ).", "This additional offpolicy learning term provides gradient information for generator learning if g θ (y − |x) is not zero, meaning that for it to be effective in helping exploration, the generator cannot be collapsed at the first place.", "Hence, in practice, this term is only used to further help on top of the entropy regularization, but it does not replace it.", "Related Work Smith and Eisner (2005) proposed contrastive estimation as a way for unsupervised learning of log-linear models by taking implicit evidence from user-defined neighborhoods around observed datapoints.", "Gutmann and Hyvärinen (2010) introduced NCE as an alternative to the hierarchical softmax.", "In the works of Mnih and Teh (2012) and Mnih and Kavukcuoglu (2013) , NCE is applied to log-bilinear models and Vaswani et al.", "(2013) applied NCE to neural probabilistic language models (Yoshua et al., 2003) .", "Compared to these previous NCE methods that rely on simple fixed sampling heuristics, ACE uses an adaptive sampler that produces harder negatives.", "In the domain of max-margin estimation for structured prediction (Taskar et al., 2005) , loss augmented MAP inference plays the role of finding hard negatives (the hardest).", "However, this inference is only tractable in a limited class of models such structured SVM (Tsochantaridis et al., 2005) .", "Compared to those models that use exact maximization to find the hardest negative configuration each time, the generator in ACE can be viewed as learning an approximate amortized inference network.", "Concurrently to this work, Tu and Gimpel (2018) proposes a very similar framework, using a learned inference network for Structured prediction energy networks (SPEN) (Belanger and McCallum, 2016) .", "Concurrent with our work, there have been other interests in applying the GAN to NLP problems (Fedus et al., 2018; Wang et al., 2018; Cai and Wang, 2017) .", 
"Knowledge graph models naturally lend to a GAN setup, and has been the subject of study in Wang et al.", "(2018) and Cai and Wang (2017) .", "These two concurrent works are most closely related to one of the three tasks on which we study ACE in this work.", "Besides a more general formulation that applies to problems beyond those considered in Wang et al.", "(2018) and Cai and Wang (2017) , the techniques introduced in our work on handling false negatives and entropy regularization lead to improved experimental results as shown in Sec.", "5.4.", "Application of ACE on three tasks 4.1 Word Embeddings Word embeddings learn a vector representation of words from co-occurrences in a text corpus.", "NCE casts this learning problem as a binary classification where the model tries to distinguish positive word and context pairs, from negative noise samples composed of word and false context pairs.", "The NCE objective in Skip-gram (Mikolov et al., 2013) for word embeddings is a separable loss of the form: L = − wt∈V [log p(y = 1|w t , w + c ) + K c=1 log p(y = 0|w t , w − c )] (9) Here, w + c is sampled from the set of true contexts and w − c ∼ Q is sampled k times from a fixed noise distribution.", "Mikolov et al.", "(2013) introduced a further simplification of NCE, called \"Negative Sampling\" (Dyer, 2014) .", "With respect to our ACE framework, the difference between NCE and Negative Sampling is inconsequential, so we continue the discussion using NCE.", "A drawback of this sampling scheme is that it favors more common words as context.", "Another issue is that the negative context words are sampled in the same way, rather than tailored toward the actual target word.", "To apply ACE to this problem we first define the value function for the minimax game, V (D, G), as follows: V (D, G) = E p + (wc) [log D(w c , w t )] − E pnce(wc) [− log(1 − D(w c , w t ))] − E g θ (wc|wt) [− log(1 − D(w c , w t ))] (10) with D = p(y = 1|w t , w c ) and G = g θ (w c |w t ).", "Implementation details For our experiments, we train all our models on a single pass of the May 2017 dump of the English Wikipedia with lowercased unigrams.", "The vocabulary size is restricted to the top 150k most frequent words when training from scratch while for finetuning we use the same vocabulary as Pennington et al.", "(2014) , which is 400k of the most frequent words.", "We use 5 NCE samples for each positive sample and 1 adversarial sample in a window size of 10 and the same positive subsampling scheme proposed by Mikolov et al.", "(2013) .", "Learning for both G and D uses Adam (Kingma and Ba, 2014) optimizer with its default parameters.", "Our conditional discriminator is modeled using the Skip-Gram architecture, which is a two layer neural network with a linear mapping between the layers.", "The generator network consists of an embedding layer followed by two small hidden layers, followed by an output softmax layer.", "The first layer of the generator shares its weights with the second embedding layer in the discriminator network, which we find really speeds up convergence as the generator does not have to relearn its own set of embeddings.", "The difference between the discriminator and generator is that a sigmoid nonlinearity is used after the second layer in the discriminator, while in the generator, a softmax layer is used to define a categorical distribution over negative word candidates.", "We find that controlling the generator entropy is critical for finetuning experiments as otherwise the generator collapses to its 
favorite negative sample.", "The word embeddings are taken to be the first dense matrix in the discriminator.", "Order Embeddings Hypernym Prediction As introduced in Vendrov et al.", "(2016) , ordered representations over hierarchy can be learned by order embeddings.", "An example task for such ordered representation is hypernym prediction.", "A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second.", "For completeness, we briefly describe order embeddings, then analyze ACE on the hypernym prediction task.", "In order embeddings, each entity is represented by a vector in R N , the score for a positive ordered pair of entities (x, y) is defined by s ω (x, y) = ||max(0, y − x)|| 2 and, score for a negative ordered pair (x + , y − ) is defined bỹ s ω (x + , y − ) = max{0, η − s(x + , y − )}, where is η is the margin.", "Let f (u) be the embedding function which takes an entity as input and outputs en embedding vector.", "We define P as a set of positive pairs and N as negative pairs, the separable loss function for order embedding task is defined by: L = (u,v)∈P s ω (f (u), f (v)))+ (u,v)∈Ns (f (u), f (v)) (11) Implementation details Our generator for this task is just a linear fully connected softmax layer, taking an embedding vector from discriminator as input and outputting a categorical distribution over the entity set.", "For the discriminator, we inherit all model setting from Vendrov et al.", "(2016) : we use 50 dimensions hidden state and bash size 1000, a learning rate of 0.01 and the Adam optimizer.", "For the generator, we use a batch size of 1000, a learning rate 0.01 and the Adam optimizer.", "We apply weight decay with rate 0.1 and entropy loss regularization as described in Sec.", "2.4.", "We handle false negative as described in Sec.", "2.5.", "After cross validation, variance reduction and leveraging NCE samples does not greatly affect the order embedding task.", "Knowledge Graph Embeddings Knowledge graphs contain entity and relation data of the form (head entity, relation, tail entity), and the goal is to learn from observed positive entity relations and predict missing links (a.k.a.", "link prediction).", "There have been many works on knowledge graph embeddings, e.g.", "TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , TransH (Wang et al., 2014) , TransD (Ji et al., 2015) , Complex (Trouillon et al., 2016) , DistMult (Yang et al., 2014) and ConvE (Dettmers et al., 2017) .", "Many of them use a contrastive learning objective.", "Here we take TransD as an example, and modify its noise contrastive learning to ACE, and demonstrate significant improvement in sample efficiency and link prediction results.", "Implementation details Let a positive entity-relation-entity triplet be denoted by ξ + = (h + , r + , t + ), and a negative triplet could either have its head or tail be a negative sample, i.e.", "ξ − = (h − , r + , t + ) or ξ − = (h + , r + , t − ).", "In either case, the general formulation in Sec.", "2.1 still applies.", "The non-separable loss function takes on the form: l = max(0, η + s ω (ξ + ) − s ω (ξ − )) (12) The scoring rule is: s = h ⊥ + r − t ⊥ (13) where r is the embedding vector for r, and h ⊥ is projection of the embedding of h onto the space of r by h ⊥ = h + r p h p h, where r p and h p are projection parameters of the model.", "t ⊥ is defined in a similar way through parameters t, t p and r p .", "The form of the generator g θ (t − |r + , h + ) is chosen to be f θ (h ⊥ , h ⊥ + r), where f θ 
is a feedforward neural net that concatenates its two input arguments, then propagates through two hidden layers, followed by a final softmax output layer.", "As a function of (r + , h + ), g θ shares parameter with the discriminator, as the inputs to f θ are the embedding vectors.", "During generator learning, only θ is updated and the TransD model embedding parameters are frozen.", "Experiments We evaluate ACE with experiments on word embeddings, order embeddings, and knowledge graph embeddings tasks.", "In short, whenever the original learning objective is contrastive (all tasks except Glove fine-tuning) our results consistently show that ACE improves over NCE.", "In some cases, we include additional comparisons to the state-of-art results on the task to put the significance of such improvements in context: the generic ACE can often make a reasonable baseline competitive with SOTA methods that are optimized for the task.", "For word embeddings, we evaluate models trained from scratch as well as fine-tuned Glove models (Pennington et al., 2014) on word similarity tasks that consist of computing the similarity between word pairs where the ground truth is an average of human scores.", "We choose the Rare word dataset (Luong et al., 2013) and WordSim-353 (Finkelstein et al., 2001) by virtue of our hypothesis that ACE learns better representations for both rare and frequent words.", "We also qualitatively evaluate ACE word embeddings by inspecting the nearest neighbors of selected words.", "For the hypernym prediction task, following Vendrov et al.", "(2016) , hypernym pairs are created from the WordNet hierarchy's transitive closure.", "We use the released random development split and test split from Vendrov et al.", "(2016) , which both contain 4000 edges.", "For knowledge graph embeddings, we use TransD (Ji et al., 2015) as our base model, and perform ablation study to analyze the behavior of ACE with various add-on features, and confirm that entropy regularization is crucial for good performance in ACE.", "We also obtain link prediction results that are competitive or superior to the stateof-arts on the WN18 dataset (Bordes et al., 2014) .", "Training Word Embeddings from scratch In this experiment, we empirically observe that training word embeddings using ACE converges significantly faster than NCE after one epoch.", "As shown in Fig.", "3 both ACE (a mixture of p nce and g θ ) and just g θ (denoted by ADV) significantly outperforms the NCE baseline, with an absolute improvement of 73.1% and 58.5% respectively on RW score.", "We note similar results on WordSim-353 dataset where ACE and ADV outperforms NCE by 40.4% and 45.7%.", "We also evaluate our model qualitatively by inspecting the nearest neighbors of selected words in Table.", "1.", "We first present the five nearest neighbors to each word to show that both NCE and ACE models learn sensible embeddings.", "We then show that ACE embeddings have much better semantic relevance in a larger neighborhood (nearest neighbor 45-50).", "Finetuning Word Embeddings We take off-the-shelf pre-trained Glove embeddings which were trained using 6 billion tokens (Pennington et al., 2014) and fine-tune them using our algorithm.", "It is interesting to note that the original Glove objective does not fit into the contrastive learning framework, but nonetheless we find that they benefit from ACE.", "In fact, we observe that training such that 75% of the words appear as positive contexts is sufficient to beat the largest dimensionality pre-trained Glove 
model on word similarity tasks.", "We evaluate our performance on the Rare Word and WordSim353 data.", "As can be seen from our results in Table 2 , ACE on RW is not always better and for the 100d and 300d Glove embeddings is marginally worse.", "However, on WordSim353 ACE does considerably better across the board to the point where 50d Glove embeddings outperform the 300d baseline Glove model.", "Hypernym Prediction As shown in Table 3 , with ACE training, our method achieves a 1.5% improvement on accu- racy over Vendrov et al.", "(2016) without tunning any of the discriminator's hyperparameters.", "We further report training curve in Fig.", "1 , we report loss curve on randomly sampled pairs.", "We stress that in the ACE model, we train random pairs and generator generated pairs jointly, as shown in Fig.", "2 , hard negatives help the order embedding model converges faster.", "Ablation Study and Improving TransD To analyze different aspects of ACE, we perform an ablation study on the knowledge graph embedding task.", "As described in Sec.", "4.3, the base Method Accuracy (%) order-embeddings 90.6 order-embeddings + Our ACE 92.0 Table 3 : Order Embedding Performance model (discriminator) we apply ACE to is TransD (Ji et al., 2015) .", "Fig.", "5 shows validation performance as training progresses.", "All variants of ACE converges to better results than base NCE.", "Among ACE variants, all methods that include entropy regularization significantly outperform without entropy regularization.", "Without the self critical baseline variance reduction, learning could progress faster at the beginning but the final performance suffers slightly.", "The best performance is obtained without the additional off-policy learning of the generator.", "Table.", "4 shows the final test results on WN18 link prediction task.", "It is interesting to note that ACE improves MRR score more significantly than hit@10.", "As MRR is a lot more sensitive to the top rankings, i.e., how the correct configuration ranks among the competitive alternatives, this is consistent with the fact that ACE samples hard negatives and forces the base model to learn a more discriminative representation of the positive examples.", "(Trouillon et al., 2016) , which achieves the SOTA on this dataset.", "Among all TransD based models (the best results in this group is underlined), ACE improves over basic NCE and another GAN based approach KBGAN.", "The gap on MRR is likely due to the difference between TransD and COMPLEX models.", "Hard Negative Analysis To better understand the effect of the adversarial samples proposed by the generator we plot the discriminator loss on both p nce and g θ samples.", "In this context, a harder sample means a higher loss assigned by the discriminator.", "Fig.", "4 shows that discriminator loss for the word embedding task on g θ samples are always higher than on p nce samples, confirming that the generator is indeed sampling harder negatives.", "For Hypernym Prediction task, Fig.2 shows discriminator loss on negative pairs sampled from NCE and ACE respectively.", "The higher the loss the harder the negative pair is.", "As indicated in the left plot, loss on the ACE negative terms collapses faster than on the NCE negatives.", "After adding entropy regularization and weight decay, the generator works as expected.", "Limitations When the generator softmax is large, the current implementation of ACE training is computationally expensive.", "Although ACE converges faster per iteration, it may converge more slowly 
on wall-clock time depending on the cost of the softmax.", "However, embeddings are typically used as pre-trained building blocks for subsequent tasks.", "Thus, their learning is usually the pre-computation step for the more complex downstream models and spending more time is justified, especially with GPU acceleration.", "We believe that the computational cost could potentially be reduced via some existing techniques such as the \"augment and reduce\" variational inference of (Ruiz et al., 2018), adaptive softmax (Grave et al., 2016) , or the \"sparsely-gated\" softmax of Shazeer et al.", "(2017) , but leave that to future work.", "Another limitation is on the theoretical front.", "As noted in Goodfellow (2014) , GAN learning does not implement maximum likelihood estimation (MLE), while NCE has MLE as an asymptotic limit.", "To the best of our knowledge, more distant connections between GAN and MLE training are not known, and tools for analyzing the equilibrium of a min-max game where players are parametrized by deep neural nets are currently not available to the best of our knowledge.", "Conclusion In this paper, we propose Adversarial Contrastive Estimation as a general technique for improving supervised learning problems that learn by contrasting observed and fictitious samples.", "Specifically, we use a generator network in a conditional GAN like setting to propose hard negative examples for our discriminator model.", "We find that a mixture distribution of randomly sampling negative examples along with an adaptive negative sampler leads to improved performances on a variety of embedding tasks.", "We validate our hypothesis that hard negative examples are critical to optimal learning and can be proposed via our ACE framework.", "Finally, we find that controlling the entropy of the generator through a regularization term and properly handling false negatives is crucial for successful training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "3", "4", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "5.5", "6", "7" ], "paper_header_content": [ "Introduction", "Background: contrastive learning", "Adversarial mixture noise", "Learning the generator", "Entropy and training stability", "Handling false negatives", "Variance Reduction", "Related Work", "Application of ACE on three tasks 4.1 Word Embeddings", "Order Embeddings Hypernym Prediction", "Knowledge Graph Embeddings", "Experiments", "Training Word Embeddings from scratch", "Finetuning Word Embeddings", "Hypernym Prediction", "Ablation Study and Improving TransD", "Hard Negative Analysis", "Limitations", "Conclusion" ] }
GEM-SciDuet-train-33#paper-1047#slide-11
ACE for Knowledge Graph Embeddings
Negative Triplet: Either the negative head or the negative tail is sampled, i.e., (h-, r+, t+) or (h+, r+, t-). ACE Generator: g(t-|r+, h+) or g(h-|r+, t+), parametrized by a feed-forward neural net.
Negative Triplet: Either the negative head or the negative tail is sampled, i.e., (h-, r+, t+) or (h+, r+, t-). ACE Generator: g(t-|r+, h+) or g(h-|r+, t+), parametrized by a feed-forward neural net.
[]
GEM-SciDuet-train-33#paper-1047#slide-13
1047
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226 ], "paper_content_text": [ "Introduction Many models learn by contrasting losses on observed positive examples with those on some fictitious negative examples, trying to decrease some score on positive ones while increasing it on negative ones.", "There are multiple reasons why such contrastive learning approach is needed.", "Computational tractability is one.", "For instance, instead of using softmax to predict a word for learning word embeddings, noise contrastive estimation (NCE) (Dyer, 2014; Mnih and Teh, 2012) can be used in skip-gram or CBOW word embedding models (Gutmann and Hyvärinen, 2012; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) .", "Another reason is * authors contributed equally † Work done while author was an intern at Borealis AI modeling need, as certain assumptions are best expressed as some score or energy in margin based or un-normalized probability models (Smith and Eisner, 2005) .", "For example, modeling entity relations as translations or variants thereof in a vector space naturally leads to a distance-based score to be minimized for observed entity-relation-entity triplets (Bordes et al., 2013) .", "Given a scoring function, the gradient of the model's parameters on observed positive examples can be readily computed, but the negative phase requires a design decision on how to sample data.", "In noise contrastive estimation for word embeddings, a negative example is formed by replacing a component of a positive pair by randomly selecting a sampled word from the vocabulary, resulting in a fictitious word-context pair which would be unlikely to actually exist in the dataset.", "This negative sampling by corruption approach is also used in learning knowledge graph embeddings (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Trouillon et al., 2016; Yang et al., 2014; Dettmers et al., 2017) , order embeddings (Vendrov et al., 2016) , caption generation (Dai and Lin, 2017) , etc.", "Typically the corruption distribution is the same for all inputs like in skip-gram or CBOW NCE, rather than being a conditional distribution that takes into account information about the input sample under consideration.", "Furthermore, the corruption process usually only encodes a human prior as to what constitutes a hard negative sample, rather than being learned from data.", "For these two reasons, the simple fixed corruption process often yields only easy negative examples.", "Easy negatives are sub-optimal 
for learning discriminative representation as they do not force the model to find critical characteristics of observed positive data, which has been independently discovered in applications outside NLP previously (Shrivastava et al., 2016) .", "Even if hard negatives are occasionally reached, the infrequency means slow convergence.", "Designing a more sophisticated corruption process could be fruitful, but requires costly trialand-error by a human expert.", "In this work, we propose to augment the simple corruption noise process in various embedding models with an adversarially learned conditional distribution, forming a mixture negative sampler that adapts to the underlying data and the embedding model training progress.", "The resulting method is referred to as adversarial contrastive estimation (ACE).", "The adaptive conditional model engages in a minimax game with the primary embedding model, much like in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a) , where a discriminator net (D), tries to distinguish samples produced by a generator (G) from real data (Goodfellow et al., 2014b) .", "In ACE, the main model learns to distinguish between a real positive example and a negative sample selected by the mixture of a fixed NCE sampler and an adversarial generator.", "The main model and the generator takes alternating turns to update their parameters.", "In fact, our method can be viewed as a conditional GAN (Mirza and Osindero, 2014) on discrete inputs, with a mixture generator consisting of a learned and a fixed distribution, with additional techniques introduced to achieve stable and convergent training of embedding models.", "In our proposed ACE approach, the conditional sampler finds harder negatives than NCE, while being able to gracefully fall back to NCE whenever the generator cannot find hard negatives.", "We demonstrate the efficacy and generality of the proposed method on three different learning tasks, word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) and knowledge graph embeddings (Ji et al., 2015) .", "Method Background: contrastive learning In the most general form, our method applies to supervised learning problems with a contrastive objective of the following form: L(ω) = E p(x + ,y + ,y − ) l ω (x + , y + , y − ) (1) where l ω (x + , y + , y − ) captures both the model with parameters ω and the loss that scores a positive tuple (x + , y + ) against a negative one (x + , y − ).", "E p(x + ,y + ,y − ) (.)", "denotes expectation with respect to some joint distribution over positive and negative samples.", "Furthermore, by the law of total expectation, and the fact that given x + , the negative sampling is not dependent on the positive label, i.e.", "p(y + , y − |x + ) = p(y + |x + )p(y − |x + ), Eq.", "1 can be re-written as E p(x + ) [E p(y + |x + )p(y − |x + ) l ω (x + , y + , y − )] (2) Separable loss In the case where the loss decomposes into a sum of scores on positive and negative tuples such as l ω (x + , y + , y − ) = s ω (x + , y + )−s ω (x + , y − ), then Expression.", "2 becomes E p + (x) [E p + (y|x) s ω (x, y) − E p − (y|x)sω (x, y)] (3) where we moved the + and − to p for notational brevity.", "Learning by stochastic gradient descent aims to adjust ω to pushing down s ω (x, y) on samples from p + while pushing ups ω (x, y) on samples from p − .", "Note that for generality, the scoring function for negative samples, denoted bỹ s ω , could be slightly different from s ω .", "For instance,s could contain a margin as 
in the case of Order Embeddings in Sec.", "4.2.", "Non separable loss Eq.", "1 is the general form that we would like to consider because for certain problems, the loss function cannot be separated into sums of terms containing only positive (x + , y + ) and terms with negatives (x + , y − ).", "An example of such a nonseparable loss is the triplet ranking loss (Schroff et al., 2015) : l ω = max(0, η + s ω (x + , y + ) − s ω (x + , y − )), which does not decompose due to the rectification.", "Noise contrastive estimation The typical NCE approach in tasks such as word embeddings (Mikolov et al., 2013) , order embeddings (Vendrov et al., 2016) , and knowledge graph embeddings can be viewed as a special case of Eq.", "2 by taking p(y − |x + ) to be some unconditional p nce (y).", "This leads to efficient computation during training, however, p nce (y) sacrifices the sampling efficiency of learning as the negatives produced using a fixed distribution are not tailored toward x + , and as a result are not necessarily hard negative examples.", "Thus, the model is not forced to discover discriminative representation of observed positive data.", "As training progresses, more and more negative examples are correctly learned, the probability of drawing a hard negative example diminishes further, causing slow convergence.", "Adversarial mixture noise To remedy the above mentioned problem of a fixed unconditional negative sampler, we propose to augment it into a mixture one, λp nce (y) + (1 − λ)g θ (y|x), where g θ is a conditional distribution with a learnable parameter θ and λ is a hyperparameter.", "The objective in Expression.", "2 can then be written as (conditioned on x for notational brevity): L(ω, θ; x) = λ E p(y + |x)pnce(y − ) l ω (x, y + , y − ) + (1 − λ) E p(y + |x)g θ (y − |x) l ω (x, y + , y − ) (4) We learn (ω, θ) in a GAN-style minimax game: min ω max θ V (ω, θ) = min ω max θ E p + (x) L(ω, θ; x) (5 ) The embedding model behind l ω (x, y + , y − ) is similar to the discriminator in (conditional) GAN (or critic in Wasserstein or Energy-based GAN (Zhao et al., 2016) , while g θ (y|x) acts as the generator.", "Henceforth, we will use the term discriminator (D) and embedding model interchangeably, and refer to g θ as the generator.", "Learning the generator There is one important distinction to typical GAN: g θ (y|x) defines a categorical distribution over possible y values, and samples are drawn accordingly; in contrast to typical GAN over continuous data space such as images, where samples are generated by an implicit generative model that warps noise vectors into data points.", "Due to the discrete sampling step, g θ cannot learn by receiving gradient through the discriminator.", "One possible solution is to use the Gumbel-softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016) , which gives a differentiable approximation.", "However, this differentiability comes at the cost of drawing N Gumbel samples per each categorical sample, where N is the number of categories.", "For word embeddings, N is the vocabulary size, and for knowledge graph embeddings, N is the number of entities, both leading to infeasible computational requirements.", "Instead, we use the REINFORCE (Williams, 1992) gradient estimator for ∇ θ L(θ, x): (1−λ) E −l ω (x, y + , y − )∇ θ log(g θ (y − |x)) (6) where the expectation E is with respect to p(y + , y − |x) = p(y + |x)g θ (y − |x), and the discriminator loss l ω (x, y + , y − ) acts as the reward.", "With a separable loss, the (conditional) value 
function of the minimax game is: L(ω, θ; x) = E p + (y|x) s ω (x, y) − E pnce(y)sω (x, y) − E g θ (y|x)sω (x, y) (7) and only the last term depends on the generator parameter ω.", "Hence, with a separable loss, the reward is −s(x + , y − ).", "This reduction does not happen with a non-separable loss, and we have to use l ω (x, y + , y − ).", "Entropy and training stability GAN training can suffer from instability and degeneracy where the generator probability mass collapses to a few modes or points.", "Much work has been done to stabilize GAN training in the continuous case Gulrajani et al., 2017; Cao et al., 2018) .", "In ACE, if the generator g θ probability mass collapses to a few candidates, then after the discriminator successfully learns about these negatives, g θ cannot adapt to select new hard negatives, because the REIN-FORCE gradient estimator Eq.", "6 relies on g θ being able to explore other candidates during sampling.", "Therefore, if the g θ probability mass collapses, instead of leading to oscillation as in typical GAN, the min-max game in ACE reaches an equilibrium where the discriminator wins and g θ can no longer adapt, then ACE falls back to NCE since the negative sampler has another mixture component from NCE.", "This behavior of gracefully falling back to NCE is more desirable than the alternative of stalled training if p − (y|x) does not have a simple p nce mixture component.", "However, we would still like to avoid such collapse, as the adversarial samples provide greater learning signals than NCE samples.", "To this end, we propose to use a regularizer to encourage the categorical distribution g θ (y|x) to have high entropy.", "In order to make the the regularizer interpretable and its hyperparameters easy to tune, we design the following form: R ent (x) = min(0, c − H(g θ (y|x))) (8) where H(g θ (y|x)) is the entropy of the categorical distribution g θ (y|x), and c = log(k) is the entropy of a uniform distribution over k choices, and k is a hyper-parameter.", "Intuitively, R ent expresses the prior that the generator should spread its mass over more than k choices for each x.", "Handling false negatives During negative sampling, p − (y|x) could actually produce y that forms a positive pair that exists in the training set, i.e., a false negative.", "This possibility exists in NCE already, but since p nce is not adaptive, the probability of sampling a false negative is low.", "Hence in NCE, the score on this false negative (true observation) pair is pushed up less in the negative term than in the positive term.", "However, with the adaptive sampler, g ω (y|x), false negatives become a much more severe issue.", "g ω (y|x) can learn to concentrate its mass on a few false negatives, significantly canceling the learning of those observations in the positive phase.", "The entropy regularization reduces this problem as it forces the generator to spread its mass, hence reducing the chance of a false negative.", "To further alleviate this problem, whenever computationally feasible, we apply an additional two-step technique.", "First, we maintain a hash map of the training data in memory, and use it to efficiently detect if a negative sample (x + , y − ) is an actual observation.", "If so, its contribution to the loss is given a zero weight in ω learning step.", "Second, to upate θ in the generator learning step, the reward for false negative samples are replaced by a large penalty, so that the REINFORCE gradient update would steer g θ away from those samples.", "The second 
step is needed to prevent null computation where g θ learns to sample false negatives which are subsequently ignored by the discriminator update for ω. Variance Reduction The basic REINFORCE gradient estimator is poised with high variance, so in practice one often needs to apply variance reduction techniques.", "The most basic form of variance reduction is to subtract a baseline from the reward.", "As long as the baseline is not a function of actions (i.e., samples y − being drawn), the REINFORCE gradient estimator remains unbiased.", "More advanced gradient estimators exist that also reduce variance (Grathwohl et al., 2017; Tucker et al., 2017; Liu et al., 2018) , but for simplicity we use the self-critical baseline method (Rennie et al., 2016) , where the baseline is b(x) = l ω (y + , y , x), or b(x) = −s ω (y , x) in the separable loss case, and y = argmax i g θ (y i |x).", "In other words, the baseline is the reward of the most likely sample according to the generator.", "2.7 Improving exploration in g θ by leveraging NCE samples In Sec.", "2.4 we touched on the need for sufficient exploration in g θ .", "It is possible to also leverage negative samples from NCE to help the generator learn.", "This is essentially off-policy exploration in reinforcement learning since NCE samples are not drawn according to g θ (y|x).", "The generator learning can use importance re-weighting to leverage those samples.", "The resulting REIN-FORCE gradient estimator is basically the same as Eq.", "6 except that the rewards are reweighted by g θ (y − |x)/p nce (y − ), and the expectation is with respect to p(y + |x)p nce (y − ).", "This additional offpolicy learning term provides gradient information for generator learning if g θ (y − |x) is not zero, meaning that for it to be effective in helping exploration, the generator cannot be collapsed at the first place.", "Hence, in practice, this term is only used to further help on top of the entropy regularization, but it does not replace it.", "Related Work Smith and Eisner (2005) proposed contrastive estimation as a way for unsupervised learning of log-linear models by taking implicit evidence from user-defined neighborhoods around observed datapoints.", "Gutmann and Hyvärinen (2010) introduced NCE as an alternative to the hierarchical softmax.", "In the works of Mnih and Teh (2012) and Mnih and Kavukcuoglu (2013) , NCE is applied to log-bilinear models and Vaswani et al.", "(2013) applied NCE to neural probabilistic language models (Yoshua et al., 2003) .", "Compared to these previous NCE methods that rely on simple fixed sampling heuristics, ACE uses an adaptive sampler that produces harder negatives.", "In the domain of max-margin estimation for structured prediction (Taskar et al., 2005) , loss augmented MAP inference plays the role of finding hard negatives (the hardest).", "However, this inference is only tractable in a limited class of models such structured SVM (Tsochantaridis et al., 2005) .", "Compared to those models that use exact maximization to find the hardest negative configuration each time, the generator in ACE can be viewed as learning an approximate amortized inference network.", "Concurrently to this work, Tu and Gimpel (2018) proposes a very similar framework, using a learned inference network for Structured prediction energy networks (SPEN) (Belanger and McCallum, 2016) .", "Concurrent with our work, there have been other interests in applying the GAN to NLP problems (Fedus et al., 2018; Wang et al., 2018; Cai and Wang, 2017) .", 
"Knowledge graph models naturally lend to a GAN setup, and has been the subject of study in Wang et al.", "(2018) and Cai and Wang (2017) .", "These two concurrent works are most closely related to one of the three tasks on which we study ACE in this work.", "Besides a more general formulation that applies to problems beyond those considered in Wang et al.", "(2018) and Cai and Wang (2017) , the techniques introduced in our work on handling false negatives and entropy regularization lead to improved experimental results as shown in Sec.", "5.4.", "Application of ACE on three tasks 4.1 Word Embeddings Word embeddings learn a vector representation of words from co-occurrences in a text corpus.", "NCE casts this learning problem as a binary classification where the model tries to distinguish positive word and context pairs, from negative noise samples composed of word and false context pairs.", "The NCE objective in Skip-gram (Mikolov et al., 2013) for word embeddings is a separable loss of the form: L = − wt∈V [log p(y = 1|w t , w + c ) + K c=1 log p(y = 0|w t , w − c )] (9) Here, w + c is sampled from the set of true contexts and w − c ∼ Q is sampled k times from a fixed noise distribution.", "Mikolov et al.", "(2013) introduced a further simplification of NCE, called \"Negative Sampling\" (Dyer, 2014) .", "With respect to our ACE framework, the difference between NCE and Negative Sampling is inconsequential, so we continue the discussion using NCE.", "A drawback of this sampling scheme is that it favors more common words as context.", "Another issue is that the negative context words are sampled in the same way, rather than tailored toward the actual target word.", "To apply ACE to this problem we first define the value function for the minimax game, V (D, G), as follows: V (D, G) = E p + (wc) [log D(w c , w t )] − E pnce(wc) [− log(1 − D(w c , w t ))] − E g θ (wc|wt) [− log(1 − D(w c , w t ))] (10) with D = p(y = 1|w t , w c ) and G = g θ (w c |w t ).", "Implementation details For our experiments, we train all our models on a single pass of the May 2017 dump of the English Wikipedia with lowercased unigrams.", "The vocabulary size is restricted to the top 150k most frequent words when training from scratch while for finetuning we use the same vocabulary as Pennington et al.", "(2014) , which is 400k of the most frequent words.", "We use 5 NCE samples for each positive sample and 1 adversarial sample in a window size of 10 and the same positive subsampling scheme proposed by Mikolov et al.", "(2013) .", "Learning for both G and D uses Adam (Kingma and Ba, 2014) optimizer with its default parameters.", "Our conditional discriminator is modeled using the Skip-Gram architecture, which is a two layer neural network with a linear mapping between the layers.", "The generator network consists of an embedding layer followed by two small hidden layers, followed by an output softmax layer.", "The first layer of the generator shares its weights with the second embedding layer in the discriminator network, which we find really speeds up convergence as the generator does not have to relearn its own set of embeddings.", "The difference between the discriminator and generator is that a sigmoid nonlinearity is used after the second layer in the discriminator, while in the generator, a softmax layer is used to define a categorical distribution over negative word candidates.", "We find that controlling the generator entropy is critical for finetuning experiments as otherwise the generator collapses to its 
favorite negative sample.", "The word embeddings are taken to be the first dense matrix in the discriminator.", "Order Embeddings Hypernym Prediction As introduced in Vendrov et al.", "(2016) , ordered representations over hierarchy can be learned by order embeddings.", "An example task for such ordered representation is hypernym prediction.", "A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second.", "For completeness, we briefly describe order embeddings, then analyze ACE on the hypernym prediction task.", "In order embeddings, each entity is represented by a vector in R N , the score for a positive ordered pair of entities (x, y) is defined by s ω (x, y) = ||max(0, y − x)|| 2 and, score for a negative ordered pair (x + , y − ) is defined bỹ s ω (x + , y − ) = max{0, η − s(x + , y − )}, where is η is the margin.", "Let f (u) be the embedding function which takes an entity as input and outputs en embedding vector.", "We define P as a set of positive pairs and N as negative pairs, the separable loss function for order embedding task is defined by: L = (u,v)∈P s ω (f (u), f (v)))+ (u,v)∈Ns (f (u), f (v)) (11) Implementation details Our generator for this task is just a linear fully connected softmax layer, taking an embedding vector from discriminator as input and outputting a categorical distribution over the entity set.", "For the discriminator, we inherit all model setting from Vendrov et al.", "(2016) : we use 50 dimensions hidden state and bash size 1000, a learning rate of 0.01 and the Adam optimizer.", "For the generator, we use a batch size of 1000, a learning rate 0.01 and the Adam optimizer.", "We apply weight decay with rate 0.1 and entropy loss regularization as described in Sec.", "2.4.", "We handle false negative as described in Sec.", "2.5.", "After cross validation, variance reduction and leveraging NCE samples does not greatly affect the order embedding task.", "Knowledge Graph Embeddings Knowledge graphs contain entity and relation data of the form (head entity, relation, tail entity), and the goal is to learn from observed positive entity relations and predict missing links (a.k.a.", "link prediction).", "There have been many works on knowledge graph embeddings, e.g.", "TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , TransH (Wang et al., 2014) , TransD (Ji et al., 2015) , Complex (Trouillon et al., 2016) , DistMult (Yang et al., 2014) and ConvE (Dettmers et al., 2017) .", "Many of them use a contrastive learning objective.", "Here we take TransD as an example, and modify its noise contrastive learning to ACE, and demonstrate significant improvement in sample efficiency and link prediction results.", "Implementation details Let a positive entity-relation-entity triplet be denoted by ξ + = (h + , r + , t + ), and a negative triplet could either have its head or tail be a negative sample, i.e.", "ξ − = (h − , r + , t + ) or ξ − = (h + , r + , t − ).", "In either case, the general formulation in Sec.", "2.1 still applies.", "The non-separable loss function takes on the form: l = max(0, η + s ω (ξ + ) − s ω (ξ − )) (12) The scoring rule is: s = h ⊥ + r − t ⊥ (13) where r is the embedding vector for r, and h ⊥ is projection of the embedding of h onto the space of r by h ⊥ = h + r p h p h, where r p and h p are projection parameters of the model.", "t ⊥ is defined in a similar way through parameters t, t p and r p .", "The form of the generator g θ (t − |r + , h + ) is chosen to be f θ (h ⊥ , h ⊥ + r), where f θ 
is a feedforward neural net that concatenates its two input arguments, then propagates through two hidden layers, followed by a final softmax output layer.", "As a function of (r + , h + ), g θ shares parameter with the discriminator, as the inputs to f θ are the embedding vectors.", "During generator learning, only θ is updated and the TransD model embedding parameters are frozen.", "Experiments We evaluate ACE with experiments on word embeddings, order embeddings, and knowledge graph embeddings tasks.", "In short, whenever the original learning objective is contrastive (all tasks except Glove fine-tuning) our results consistently show that ACE improves over NCE.", "In some cases, we include additional comparisons to the state-of-art results on the task to put the significance of such improvements in context: the generic ACE can often make a reasonable baseline competitive with SOTA methods that are optimized for the task.", "For word embeddings, we evaluate models trained from scratch as well as fine-tuned Glove models (Pennington et al., 2014) on word similarity tasks that consist of computing the similarity between word pairs where the ground truth is an average of human scores.", "We choose the Rare word dataset (Luong et al., 2013) and WordSim-353 (Finkelstein et al., 2001) by virtue of our hypothesis that ACE learns better representations for both rare and frequent words.", "We also qualitatively evaluate ACE word embeddings by inspecting the nearest neighbors of selected words.", "For the hypernym prediction task, following Vendrov et al.", "(2016) , hypernym pairs are created from the WordNet hierarchy's transitive closure.", "We use the released random development split and test split from Vendrov et al.", "(2016) , which both contain 4000 edges.", "For knowledge graph embeddings, we use TransD (Ji et al., 2015) as our base model, and perform ablation study to analyze the behavior of ACE with various add-on features, and confirm that entropy regularization is crucial for good performance in ACE.", "We also obtain link prediction results that are competitive or superior to the stateof-arts on the WN18 dataset (Bordes et al., 2014) .", "Training Word Embeddings from scratch In this experiment, we empirically observe that training word embeddings using ACE converges significantly faster than NCE after one epoch.", "As shown in Fig.", "3 both ACE (a mixture of p nce and g θ ) and just g θ (denoted by ADV) significantly outperforms the NCE baseline, with an absolute improvement of 73.1% and 58.5% respectively on RW score.", "We note similar results on WordSim-353 dataset where ACE and ADV outperforms NCE by 40.4% and 45.7%.", "We also evaluate our model qualitatively by inspecting the nearest neighbors of selected words in Table.", "1.", "We first present the five nearest neighbors to each word to show that both NCE and ACE models learn sensible embeddings.", "We then show that ACE embeddings have much better semantic relevance in a larger neighborhood (nearest neighbor 45-50).", "Finetuning Word Embeddings We take off-the-shelf pre-trained Glove embeddings which were trained using 6 billion tokens (Pennington et al., 2014) and fine-tune them using our algorithm.", "It is interesting to note that the original Glove objective does not fit into the contrastive learning framework, but nonetheless we find that they benefit from ACE.", "In fact, we observe that training such that 75% of the words appear as positive contexts is sufficient to beat the largest dimensionality pre-trained Glove 
model on word similarity tasks.", "We evaluate our performance on the Rare Word and WordSim353 data.", "As can be seen from our results in Table 2 , ACE on RW is not always better and for the 100d and 300d Glove embeddings is marginally worse.", "However, on WordSim353 ACE does considerably better across the board to the point where 50d Glove embeddings outperform the 300d baseline Glove model.", "Hypernym Prediction As shown in Table 3 , with ACE training, our method achieves a 1.5% improvement on accu- racy over Vendrov et al.", "(2016) without tunning any of the discriminator's hyperparameters.", "We further report training curve in Fig.", "1 , we report loss curve on randomly sampled pairs.", "We stress that in the ACE model, we train random pairs and generator generated pairs jointly, as shown in Fig.", "2 , hard negatives help the order embedding model converges faster.", "Ablation Study and Improving TransD To analyze different aspects of ACE, we perform an ablation study on the knowledge graph embedding task.", "As described in Sec.", "4.3, the base Method Accuracy (%) order-embeddings 90.6 order-embeddings + Our ACE 92.0 Table 3 : Order Embedding Performance model (discriminator) we apply ACE to is TransD (Ji et al., 2015) .", "Fig.", "5 shows validation performance as training progresses.", "All variants of ACE converges to better results than base NCE.", "Among ACE variants, all methods that include entropy regularization significantly outperform without entropy regularization.", "Without the self critical baseline variance reduction, learning could progress faster at the beginning but the final performance suffers slightly.", "The best performance is obtained without the additional off-policy learning of the generator.", "Table.", "4 shows the final test results on WN18 link prediction task.", "It is interesting to note that ACE improves MRR score more significantly than hit@10.", "As MRR is a lot more sensitive to the top rankings, i.e., how the correct configuration ranks among the competitive alternatives, this is consistent with the fact that ACE samples hard negatives and forces the base model to learn a more discriminative representation of the positive examples.", "(Trouillon et al., 2016) , which achieves the SOTA on this dataset.", "Among all TransD based models (the best results in this group is underlined), ACE improves over basic NCE and another GAN based approach KBGAN.", "The gap on MRR is likely due to the difference between TransD and COMPLEX models.", "Hard Negative Analysis To better understand the effect of the adversarial samples proposed by the generator we plot the discriminator loss on both p nce and g θ samples.", "In this context, a harder sample means a higher loss assigned by the discriminator.", "Fig.", "4 shows that discriminator loss for the word embedding task on g θ samples are always higher than on p nce samples, confirming that the generator is indeed sampling harder negatives.", "For Hypernym Prediction task, Fig.2 shows discriminator loss on negative pairs sampled from NCE and ACE respectively.", "The higher the loss the harder the negative pair is.", "As indicated in the left plot, loss on the ACE negative terms collapses faster than on the NCE negatives.", "After adding entropy regularization and weight decay, the generator works as expected.", "Limitations When the generator softmax is large, the current implementation of ACE training is computationally expensive.", "Although ACE converges faster per iteration, it may converge more slowly 
on wall-clock time depending on the cost of the softmax.", "However, embeddings are typically used as pre-trained building blocks for subsequent tasks.", "Thus, their learning is usually the pre-computation step for the more complex downstream models and spending more time is justified, especially with GPU acceleration.", "We believe that the computational cost could potentially be reduced via some existing techniques such as the \"augment and reduce\" variational inference of (Ruiz et al., 2018), adaptive softmax (Grave et al., 2016) , or the \"sparsely-gated\" softmax of Shazeer et al.", "(2017) , but leave that to future work.", "Another limitation is on the theoretical front.", "As noted in Goodfellow (2014) , GAN learning does not implement maximum likelihood estimation (MLE), while NCE has MLE as an asymptotic limit.", "To the best of our knowledge, more distant connections between GAN and MLE training are not known, and tools for analyzing the equilibrium of a min-max game where players are parametrized by deep neural nets are currently not available to the best of our knowledge.", "Conclusion In this paper, we propose Adversarial Contrastive Estimation as a general technique for improving supervised learning problems that learn by contrasting observed and fictitious samples.", "Specifically, we use a generator network in a conditional GAN like setting to propose hard negative examples for our discriminator model.", "We find that a mixture distribution of randomly sampling negative examples along with an adaptive negative sampler leads to improved performances on a variety of embedding tasks.", "We validate our hypothesis that hard negative examples are critical to optimal learning and can be proposed via our ACE framework.", "Finally, we find that controlling the entropy of the generator through a regularization term and properly handling false negatives is crucial for successful training." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "3", "4", "4.2", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "5.5", "6", "7" ], "paper_header_content": [ "Introduction", "Background: contrastive learning", "Adversarial mixture noise", "Learning the generator", "Entropy and training stability", "Handling false negatives", "Variance Reduction", "Related Work", "Application of ACE on three tasks 4.1 Word Embeddings", "Order Embeddings Hypernym Prediction", "Knowledge Graph Embeddings", "Experiments", "Training Word Embeddings from scratch", "Finetuning Word Embeddings", "Hypernym Prediction", "Ablation Study and Improving TransD", "Hard Negative Analysis", "Limitations", "Conclusion" ] }
GEM-SciDuet-train-33#paper-1047#slide-13
ACE for Order Embeddings
Hypernym Prediction: A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second. Learning embeddings that are hierarchy preserving. The Root Node is at the origin and all other embeddings lie in the positive semi-space. The constraint enforces the magnitude of the parent's embedding to be smaller than the child's in every dimension. Sibling nodes are not subject to this constraint.
Hypernym Prediction: A hypernym pair is a pair of concepts where the first concept is a specialization or an instance of the second. Learning embeddings that are hierarchy preserving. The Root Node is at the origin and all other embeddings lie in the positive semi-space. The constraint enforces the magnitude of the parent's embedding to be smaller than the child's in every dimension. Sibling nodes are not subject to this constraint.
[]
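The slide content above for this entry describes hierarchy-preserving embeddings: the root sits at the origin, all vectors lie in the positive semi-space, and a parent must be no larger than its child in every dimension, with no constraint between siblings. A minimal sketch of the standard order-violation penalty that encodes exactly that constraint (the usual order-embedding energy, written from the slide's description rather than taken from the paper) is:

import torch

def order_violation(child, parent):
    # Zero exactly when every coordinate of `parent` is <= the corresponding
    # coordinate of `child` (parent closer to the origin); positive otherwise.
    # Siblings are simply never passed to this function, so they stay unconstrained.
    return torch.clamp(parent - child, min=0).pow(2).sum(dim=-1)

# Toy 3-d embeddings (made up for illustration):
parent = torch.tensor([0.2, 0.1, 0.3])
child = torch.tensor([0.5, 0.4, 0.3])
print(order_violation(child, parent))   # tensor(0.) -> a valid hypernym pair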
GEM-SciDuet-train-34#paper-1048#slide-0
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
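As a concrete reading of the gated fusion in Section 3 above (Eqs. 4-6), the sketch below mixes a headline context and a lead context with element-wise sigmoid gates conditioned on both contexts and the decoder state. It is a minimal PyTorch rendering of the published equations; layer names, hidden size, and batch shapes are assumptions rather than the authors' code.

import torch
import torch.nn as nn

class GateFusion(nn.Module):
    # Element-wise gated mixing of two attention contexts (Eqs. 4-6 above).
    def __init__(self, dim):
        super().__init__()
        # W and W' in the paper: each maps [d_t; d'_t; h_t] to a gate of size dim.
        self.gate_head = nn.Linear(3 * dim, dim)
        self.gate_lead = nn.Linear(3 * dim, dim)

    def forward(self, d_head, d_lead, dec_state):
        x = torch.cat([d_head, d_lead, dec_state], dim=-1)
        w_head = torch.sigmoid(self.gate_head(x))    # Eq. (4)
        w_lead = torch.sigmoid(self.gate_lead(x))    # Eq. (5)
        return w_head * d_head + w_lead * d_lead     # Eq. (6): mixed context

# Toy usage with an assumed hidden size of 4 and batch of 2:
fusion = GateFusion(dim=4)
d_head, d_lead, h = torch.rand(2, 4), torch.rand(2, 4), torch.rand(2, 4)
mixed = fusion(d_head, d_lead, h)

The mixed vector then stands in for the single attention context in the decoder's output layer, which is what substituting the fused context into Eq. (2) amounts to.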
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-0
Summary
Address short title generation for a news aggregation service, where editors create short titles to introduce important articles. Show a practical use case of neural headline generation. Most news articles basically already have headlines. Propose an encoder-decoder model with multiple encoders. Deploy our model to an editing support tool and show the results of comparing the editors' behavior. Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
Address short title generation for a news aggregation service, where editors create short titles to introduce important articles. Show a practical use case of neural headline generation. Most news articles basically already have headlines. Propose an encoder-decoder model with multiple encoders. Deploy our model to an editing support tool and show the results of comparing the editors' behavior. Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
[]
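Section 5.4 of the paper above scores each article by the maximum ROUGE-L-style matching rate between the editor's final title and the displayed candidates, where ROUGE-L is taken as a longest-common-subsequence ratio. The sketch below uses the textbook LCS dynamic program; normalizing by the editor's title length is an assumption, since the paper does not spell out the denominator.

def lcs_len(a, b):
    # Standard O(len(a) * len(b)) longest-common-subsequence DP over characters.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def matching_rate(editor_title, candidates):
    # Maximum over candidates, assuming the editor refers to the closest suggestion.
    return max(lcs_len(editor_title, c) / len(editor_title) for c in candidates)

# Placeholder strings instead of real Japanese titles:
print(matching_rate("abcde", ["abxde", "zzzzz"]))   # 0.8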
GEM-SciDuet-train-34#paper-1048#slide-1
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-1
Yahoo News
Biggest news portal in Japan delivered by providers Editors choice feature Professional editors 2. Put a new Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
Biggest news portal in Japan delivered by providers Editors choice feature Professional editors 2. Put a new Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
[]
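Below is a minimal, hypothetical Python sketch of the sequence-matching rate described in the "Effect of Deployment" analysis in the record above: ROUGE-L between an editor's correct title and each generated candidate, with each article scored by its best candidate. The function names and the recall-style normalization (LCS length over the editor title's length) are assumptions; the text only defines it as "the rate of the length of the longest common subsequence between two sequences."

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common character subsequence of a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def rouge_l(editor_title: str, candidate: str) -> float:
    """Assumed recall-style ROUGE-L: LCS length normalized by the editor title's length."""
    return lcs_length(editor_title, candidate) / len(editor_title) if editor_title else 0.0


def article_matching_rate(editor_title: str, candidates: list[str]) -> float:
    """Per-article score: the maximum over displayed candidates, assuming the
    editor referred to the most promising one; articles without candidates are skipped."""
    return max((rouge_l(editor_title, c) for c in candidates), default=0.0)
```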
GEM-SciDuet-train-34#paper-1048#slide-2
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-2
Short title generation as editing support
Purpose: To generate short title candidates to help editors Task: Translation from (headline, lead) to short title Lead is a short version (summary) of the article Selected news article List of news articles Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
Purpose: To generate short title candidates to help editors Task: Translation from (headline, lead) to short title Lead is a short version (summary) of the article Selected news article List of news articles Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
[]
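A minimal PyTorch sketch of the gated context fusion in Eqs. (4)-(6) of the record above, which mixes the headline context d_t and the lead context d'_t element-wise given the decoder state. OpenNMT-py is PyTorch-based, but this class, its names, its use of bias terms, and its dimensions are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class GateFusion(nn.Module):
    """Element-wise gated mixing of a headline context and a lead context
    conditioned on the decoder hidden state (Eqs. 4-6 above)."""

    def __init__(self, ctx_dim: int, dec_dim: int):
        super().__init__()
        in_dim = 2 * ctx_dim + dec_dim               # size of [d_t; d'_t; h_t]
        self.gate_head = nn.Linear(in_dim, ctx_dim)  # W  in Eq. (4)
        self.gate_lead = nn.Linear(in_dim, ctx_dim)  # W' in Eq. (5)

    def forward(self, d_head: torch.Tensor, d_lead: torch.Tensor,
                h_dec: torch.Tensor) -> torch.Tensor:
        concat = torch.cat([d_head, d_lead, h_dec], dim=-1)
        w_head = torch.sigmoid(self.gate_head(concat))  # Eq. (4)
        w_lead = torch.sigmoid(self.gate_lead(concat))  # Eq. (5)
        return w_head * d_head + w_lead * d_lead        # Eq. (6): mixed context
```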
GEM-SciDuet-train-34#paper-1048#slide-3
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-3
Example of short title, headline, and lead
different Short title generation task is not so easy
different Short title generation task is not so easy
[]
GEM-SciDuet-train-34#paper-1048#slide-4
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
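The deployment section above describes two candidate filters: a perplexity cutoff (inverse geometric mean of the per-character generation probabilities, threshold 1.47 = 1/0.68) and skipping of near-duplicate hypotheses whose edit distance to an already selected candidate is below 2, with up to five candidates shown. The sketch below illustrates that selection logic; the function names and the `(text, char_probs)` hypothesis format are assumptions for illustration, not the tool's actual interface.

```python
import math


def perplexity(char_probs: list[float]) -> float:
    """Inverse geometric mean of the per-character generation probabilities."""
    return math.exp(-sum(math.log(p) for p in char_probs) / len(char_probs))


def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance with unit costs (insertions, deletions, substitutions)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, start=1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,            # delete a character of a
                        dp[j - 1] + 1,        # insert a character of b
                        prev + (ca != cb))    # substitute
            prev = cur
    return dp[len(b)]


def select_candidates(hypotheses, ppl_threshold=1.47, min_dist=2, max_candidates=5):
    """Filter beam-search hypotheses (text, char_probs), sorted by probability."""
    selected = []
    for text, char_probs in hypotheses:
        if perplexity(char_probs) > ppl_threshold:
            continue  # cut off unpromising candidates
        if any(edit_distance(text, s) < min_dist for s in selected):
            continue  # skip near-duplicate candidates
        selected.append(text)
        if len(selected) == max_candidates:
            break
    return selected
```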
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-4
Encoder-decoder model with attention
Conditional language model consisting of two RNNs. Described by three components (encoder, attention, decoder): the encoder RNN, the decoder RNN, and attention, which calculates a context from the encoder's states.
Conditional language model consisting of two RNNs. Described by three components (encoder, attention, decoder): the encoder RNN, the decoder RNN, and attention, which calculates a context from the encoder's states.
[]
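The slide above and Eq. (3) in the paper content describe the attention context as a weighted average of encoder hidden states given the current decoder state. A small NumPy sketch of that step follows; the dot-product scoring function is an assumption made for brevity (the excerpt does not fix a particular score), and this is not the authors' OpenNMT implementation.

```python
import numpy as np


def attention_context(decoder_state, encoder_states):
    """Soft attention: scores, softmax weights a_t(s), and context c_t (Eq. 3).

    decoder_state: shape (hidden_dim,); encoder_states: shape (src_len, hidden_dim).
    """
    scores = encoder_states @ decoder_state        # (src_len,), assumed dot-product score
    scores -= scores.max()                         # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # a_t(s)
    context = weights @ encoder_states             # c_t, shape (hidden_dim,)
    return context, weights
```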
GEM-SciDuet-train-34#paper-1048#slide-5
1048
GEM-SciDuet-train-34#paper-1048#slide-5
Proposed method GateFusion
Combine headline and lead contexts w/ a gating mechanism. Gating mechanism: vector weights (vs. scalar weights computed by an attention mechanism).
Combine headline and lead contexts w/ a gating mechanism. Gating mechanism: vector weights (vs. scalar weights computed by an attention mechanism).
[]
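The gating fusion summarized in the slide record above corresponds to Eqs. (4)-(6) of the paper: sigmoid gates over the concatenation of the headline context d_t, the lead context d'_t and the decoder state, followed by an element-wise mix of the two contexts. The PyTorch module below is a minimal sketch of those equations, assuming all three vectors share the same dimensionality; it is not the authors' OpenNMT implementation.

    import torch
    import torch.nn as nn

    class GateFusion(nn.Module):
        """Element-wise gating of headline and lead contexts (cf. Eqs. (4)-(6))."""
        def __init__(self, dim: int):
            super().__init__()
            # W and W' act on the concatenation [d_t; d'_t; h_t].
            self.gate_headline = nn.Linear(3 * dim, dim)
            self.gate_lead = nn.Linear(3 * dim, dim)

        def forward(self, d_headline, d_lead, dec_state):
            cat = torch.cat([d_headline, d_lead, dec_state], dim=-1)
            w = torch.sigmoid(self.gate_headline(cat))        # Eq. (4)
            w_prime = torch.sigmoid(self.gate_lead(cat))      # Eq. (5)
            return w * d_headline + w_prime * d_lead          # Eq. (6): mixed context

Because the gates are vectors, each dimension of the mixed context can favor a different source, which is the element-level weighting the paper contrasts with scalar fusion.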
GEM-SciDuet-train-34#paper-1048#slide-6
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
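Eq. (3) in the record above, the context c_t = Σ_s a_t(s) h_s, is ordinary soft attention over the encoder states. The snippet below renders it compactly; the dot-product scoring between the decoder state and the encoder states is an assumption, since the paper does not specify how a_t(s) is scored.

    import torch

    def attention_context(dec_state, enc_states):
        """dec_state: (batch, dim); enc_states: (batch, src_len, dim)."""
        # Assumed dot-product scores between the decoder state and each encoder state.
        scores = torch.bmm(enc_states, dec_state.unsqueeze(-1)).squeeze(-1)  # (batch, src_len)
        weights = torch.softmax(scores, dim=-1)                              # a_t(s)
        # c_t = sum_s a_t(s) * h_s
        return torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)        # (batch, dim)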
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-6
Baselines with multiple encoders
[Architecture diagram] Headline Enc. (main source) → Atten. → Decoder
[Architecture diagram] Headline Enc. (main source) → Atten. → Decoder
[]
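For contrast with the vector gates of GateFusion, the MultiModal baseline described in the record above mixes the same two contexts with positive scalar weights, c_t = α d_t + β d'_t, obtained by an attention-style softmax. The sketch below is only one plausible reading of that scalar fusion; the exact scoring used by Hori et al. (2017) may differ.

    import torch
    import torch.nn as nn

    class ScalarFusion(nn.Module):
        """Scalar-weight mixing of two contexts, in the spirit of the MultiModal baseline."""
        def __init__(self, dim: int):
            super().__init__()
            # One score per context, conditioned on the decoder state (assumed form).
            self.score = nn.Linear(2 * dim, 1)

        def forward(self, d_headline, d_lead, dec_state):
            s_h = self.score(torch.cat([d_headline, dec_state], dim=-1))
            s_l = self.score(torch.cat([d_lead, dec_state], dim=-1))
            alpha, beta = torch.softmax(torch.cat([s_h, s_l], dim=-1), dim=-1).unbind(-1)
            # Each context is scaled by a single positive scalar, not per dimension.
            return alpha.unsqueeze(-1) * d_headline + beta.unsqueeze(-1) * d_lead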
GEM-SciDuet-train-34#paper-1048#slide-7
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
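The candidate cutoff described in the deployment section above has a simple closed form: a candidate's perplexity is the inverse of the geometric mean of its per-character generation probabilities, and candidates with perplexity above 1.47 (mean character likelihood below 0.68) are suppressed. The sketch below assumes the decoder exposes those per-character probabilities; it is an illustration, not the production filter.

    import math

    PERPLEXITY_THRESHOLD = 1.47  # == 1 / 0.68, the value reported in the paper

    def candidate_perplexity(char_probs):
        """Inverse of the geometric mean of per-character generation probabilities."""
        mean_log_p = sum(math.log(p) for p in char_probs) / len(char_probs)
        return math.exp(-mean_log_p)

    def filter_promising(candidates):
        """candidates: iterable of (text, [p_1, ..., p_n]) pairs (hypothetical format)."""
        return [text for text, probs in candidates
                if candidate_perplexity(probs) <= PERPLEXITY_THRESHOLD]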
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-7
Training dataset
263K triples of (headline, lead, short title) in Yahoo! News Headline Lead Short title Extractively solvable instances: 20% Characters in each short title are completely covered by the headline Edit distance of headlines and short titles: 23.74 Short titles cannot be easily created only from headlines
263K triples of (headline, lead, short title) in Yahoo! News Headline Lead Short title Extractively solvable instances: 20% Characters in each short title are completely covered by the headline Edit distance of headlines and short titles: 23.74 Short titles cannot be easily created only from headlines
[]
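The "extractively solvable" statistic in the training-dataset slide above (about 20% of instances) can be checked with a sketch like the following. Whether "completely covered" means a character multiset or a character set is not stated, so the multiset reading is an assumption, and the field layout of the triples is hypothetical:

    # Sketch of the "characters in each short title are completely covered by the
    # headline" check; the multiset interpretation is an assumption.
    from collections import Counter

    def extractively_solvable(headline: str, short_title: str) -> bool:
        need, have = Counter(short_title), Counter(headline)
        return all(have[ch] >= n for ch, n in need.items())

    def solvable_ratio(triples) -> float:
        """triples: iterable of (headline, lead, short_title); returns the solvable share."""
        triples = list(triples)
        hits = sum(extractively_solvable(h, t) for h, _, t in triples)
        return hits / len(triples)

    # The reported average edit distance of 23.74 between headlines and short titles
    # would come from a standard unit-cost Levenshtein routine (see the
    # candidate-filtering sketch later in this document) averaged over the same triples.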
GEM-SciDuet-train-34#paper-1048#slide-8
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
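The attention context of Eq. (3) and the gated fusion of Eqs. (4)-(6) described in the paper text above can be sketched in PyTorch as follows. The dot-product attention score, the tensor shapes, the absence of bias terms, and the module wrapper are assumptions rather than the authors' OpenNMT implementation:

    # Sketch under stated assumptions: dot-product attention for Eq. (3) and
    # element-wise sigmoid gates for Eqs. (4)-(6); not the paper's exact code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def attention_context(enc_states, dec_state):
        """enc_states: (S, H) states h_1..h_S; dec_state: (H,) decoder state h_t."""
        scores = enc_states @ dec_state                 # unnormalized alignment scores
        a_t = F.softmax(scores, dim=0)                  # a_t(s)
        return (a_t.unsqueeze(1) * enc_states).sum(0)   # c_t = sum_s a_t(s) h_s

    class GateFusion(nn.Module):
        """Mixes the headline context d_t and lead context d'_t, Eqs. (4)-(6)."""
        def __init__(self, context_dim, decoder_dim):
            super().__init__()
            in_dim = 2 * context_dim + decoder_dim      # [d_t ; d'_t ; h_t]
            # The equations show only weight matrices, so biases are omitted here.
            self.gate_head = nn.Linear(in_dim, context_dim, bias=False)   # W  in Eq. (4)
            self.gate_lead = nn.Linear(in_dim, context_dim, bias=False)   # W' in Eq. (5)

        def forward(self, d_head, d_lead, h_dec):
            feats = torch.cat([d_head, d_lead, h_dec], dim=-1)
            w_head = torch.sigmoid(self.gate_head(feats))   # Eq. (4)
            w_lead = torch.sigmoid(self.gate_lead(feats))   # Eq. (5)
            return w_head * d_head + w_lead * d_lead        # Eq. (6): mixed context c_t

    # Usage with made-up sizes (256-dim contexts, 512-dim decoder state):
    #   fusion = GateFusion(256, 512)
    #   c_t = fusion(torch.randn(256), torch.randn(256), torch.randn(512))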
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-8
Model and training settings
To reduce the computational time Ensemble of 10 models Hyper-parameter settings are listed in the right table
To reduce the computational time Ensemble of 10 models Hyper-parameter settings are listed in the right table
[]
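The training setup referenced in the slide above (Section 4.2 of the paper: uniform initialization on (-0.1, 0.1) and SGD with Nesterov momentum) can be sketched as below. The learning rate and momentum values are placeholders, since the actual hyper-parameters sit in the paper's Table 2, which is not reproduced in this dump:

    # Placeholder values only; the real hyper-parameters are in the paper's Table 2.
    import torch
    import torch.nn as nn

    def init_uniform(model: nn.Module, bound: float = 0.1):
        """Initialize all parameters by uniform sampling on (-bound, bound)."""
        for p in model.parameters():
            nn.init.uniform_(p, -bound, bound)

    def make_optimizer(model: nn.Module, lr: float = 1.0, momentum: float = 0.9):
        """SGD with Nesterov momentum, as stated in the paper (values assumed)."""
        return torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum, nesterov=True)

    # An ensemble of ten models trained from different random initializations was used
    # both for the reported results and for the deployed editing support tool.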
GEM-SciDuet-train-34#paper-1048#slide-9
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
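The two candidate filters of the deployed tool described above (Sections 5.1 and 5.2), namely dropping candidates whose perplexity (the inverse geometric mean of the per-character generation probabilities) exceeds 1.47 and skipping candidates within edit distance 2 of an already kept one, can be sketched as follows. The candidate data structure is hypothetical; only the scoring rules and thresholds come from the paper:

    # Sketch of the deployed filters; data structures are assumptions.
    import math

    PPL_THRESHOLD = 1.47   # = 1 / 0.68: mean character likelihood must exceed 0.68
    MIN_EDIT_DISTANCE = 2  # suppresses near-duplicates such as particle-only variants
    MAX_SHOWN = 5          # the tool displays up to five candidates

    def perplexity(char_probs) -> float:
        """Inverse of the geometric mean of the character generation probabilities."""
        mean_log = sum(math.log(p) for p in char_probs) / len(char_probs)
        return math.exp(-mean_log)

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance with unit costs, as in the paper."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[len(b)]

    def select_candidates(hypotheses):
        """hypotheses: (text, char_probs) pairs sorted by probability, best first."""
        kept = []
        for text, probs in hypotheses:
            if perplexity(probs) > PPL_THRESHOLD:
                continue                                     # cutoff (Section 5.1)
            if any(edit_distance(text, k) < MIN_EDIT_DISTANCE for k in kept):
                continue                                     # redundancy skip (Section 5.2)
            kept.append(text)
            if len(kept) == MAX_SHOWN:
                break
        return kept  # empty list -> the tool shows "No promising candidates."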
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-9
Human evaluation by crowdsourcing
Two crowdsourcing tasks for readability and usefulness Average score of 10 workers for each of 1,000 outputs How readable a short title was How useful a short title was compared to the headline Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
Two crowdsourcing tasks for readability and usefulness Average score of 10 workers for each of 1,000 outputs How readable a short title was How useful a short title was compared to the headline Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
[]
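The deployment analysis quoted in the record above reports a "sequence matching rate": the ROUGE-L (longest-common-subsequence) score between the editor's final title and the best of the displayed candidates, averaged over articles. A minimal Python sketch of that metric is given below; it is not the authors' code, and the character-level matching and the recall-style normalization by the reference-title length are assumptions.

```python
# Sketch of the deployment analysis described above: for each article, take the
# maximum ROUGE-L (LCS-based) score between the editor's correct title and any of
# the displayed candidates, then average over articles. Character-level matching
# is assumed because the model in the paper is character-based; the exact
# normalization (recall vs. F-measure) is also an assumption.

def lcs_length(a: str, b: str) -> int:
    """Longest common subsequence length via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(reference: str, candidate: str) -> float:
    """LCS length normalized by the reference length (recall-style ROUGE-L)."""
    if not reference:
        return 0.0
    return lcs_length(reference, candidate) / len(reference)

def sequence_matching_rate(reference: str, candidates: list) -> float:
    """Max ROUGE-L over the candidates shown for one article."""
    return max((rouge_l(reference, c) for c in candidates), default=0.0)

def average_matching_rate(articles: list) -> float:
    """Average per-article rates; articles without candidates are skipped."""
    rates = [sequence_matching_rate(ref, cands) for ref, cands in articles if cands]
    return sum(rates) / len(rates) if rates else 0.0
```

Computed this way on (reference title, candidate list) pairs collected before and after the release, the averaged score would reproduce the kind of before/after comparison the paper reports in its Table 5.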
GEM-SciDuet-train-34#paper-1048#slide-10
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-10
Evaluation results (1/2)
Our model performed well for the usefulness measure Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
Our model performed well for the usefulness measure Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
[]
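The record above describes the proposed gated fusion of the headline and lead contexts (Eqs. (4)-(6)): two sigmoid gates computed from the concatenation [d_t; d'_t; h_t] weight the two context vectors element-wise before they are mixed. A minimal PyTorch sketch follows; OpenNMT-py is PyTorch-based so the framework choice is plausible, but the module layout, layer sizes, and bias terms are assumptions rather than the authors' implementation.

```python
# Minimal PyTorch sketch of the gated context fusion (Eqs. (4)-(6) in the
# paper_content above). Two sigmoid gates, computed from [d_t; d'_t; h_t],
# weight the headline and lead contexts element-wise before mixing them.
import torch
import torch.nn as nn

class GateFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # W and W' from Eqs. (4) and (5); the bias terms are an assumption.
        self.gate_headline = nn.Linear(3 * dim, dim)
        self.gate_lead = nn.Linear(3 * dim, dim)

    def forward(self, d_headline, d_lead, dec_state):
        # d_headline, d_lead, dec_state: tensors of shape (batch, dim)
        features = torch.cat([d_headline, d_lead, dec_state], dim=-1)
        w_headline = torch.sigmoid(self.gate_headline(features))  # Eq. (4)
        w_lead = torch.sigmoid(self.gate_lead(features))          # Eq. (5)
        # Element-wise gated mixture of the two contexts, Eq. (6).
        return w_headline * d_headline + w_lead * d_lead

# Usage sketch: fused = GateFusion(512)(d_t, d_prime_t, h_hat_t); the fused
# context replaces c_t in the decoder's output function g_dec.
```

The element-wise gates are what distinguish this fusion from the scalar-weighted multimodal baseline, in which a single attention-derived weight per encoder scales each context vector as a whole.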
GEM-SciDuet-train-34#paper-1048#slide-11
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-11
Evaluation results 2
Our model performed well for the usefulness measure. QueryBased helps GateFusion generate headline-style outputs.
Our model performed well for the usefulness measure. QueryBased helps GateFusion generate headline-style outputs.
[]
GEM-SciDuet-train-34#paper-1048#slide-12
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
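The gating equations (4)-(6) in the record above survive only as flattened text, so the fusion step is restated here as a small PyTorch sketch for a single decoder time step. This is an illustration under assumptions rather than the authors' OpenNMT implementation: the class and variable names are invented, and d_head and d_lead are assumed to be the attention-weighted context vectors of Eq. (3) from the headline and lead encoders.

```python
import torch
import torch.nn as nn


class GateFusion(nn.Module):
    """Sketch of Eqs. (4)-(6): element-wise gated mixing of two context vectors."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # W and W' in Eqs. (4)-(5): map [d_t; d'_t; h_t] to a gate of hidden size.
        self.gate_head = nn.Linear(3 * hidden_size, hidden_size)
        self.gate_lead = nn.Linear(3 * hidden_size, hidden_size)

    def forward(self, d_head: torch.Tensor, d_lead: torch.Tensor,
                dec_state: torch.Tensor) -> torch.Tensor:
        # d_head, d_lead: contexts from the headline / lead encoders (Eq. (3)).
        # dec_state: decoder hidden state at the current time step.
        concat = torch.cat([d_head, d_lead, dec_state], dim=-1)
        w_head = torch.sigmoid(self.gate_head(concat))  # Eq. (4)
        w_lead = torch.sigmoid(self.gate_lead(concat))  # Eq. (5)
        return w_head * d_head + w_lead * d_lead        # Eq. (6): mixed context


# Usage sketch: fuse = GateFusion(512); c_t = fuse(d_head, d_lead, dec_state)
```

The MultiModal baseline discussed in the same record reduces to the special case where the two element-wise gates are replaced by attention-derived scalar weights.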
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-12
Editing support tool
Editors can check candidates when creating short titles.
Editors can check candidates when creating short titles.
[]
GEM-SciDuet-train-34#paper-1048#slide-13
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
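The record above reports three deployment-time filters: a cutoff on candidates whose perplexity (inverse geometric mean of the per-character generation probabilities) exceeds 1.47, skipping of candidates within edit distance 2 of an already selected one, and highlighting of characters that appear in neither the headline nor the lead. The Python sketch below mirrors that description; the candidate representation (text plus per-character probabilities) and the function names are assumptions for illustration, while the thresholds are the values reported in the paper.

```python
import math


def perplexity(char_probs: list[float]) -> float:
    """Inverse of the geometric mean of the per-character generation probabilities."""
    return math.exp(-sum(math.log(p) for p in char_probs) / len(char_probs))


def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance with unit costs for insertions, deletions, substitutions."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]


def select_candidates(hypotheses, source_text, max_ppl=1.47, min_dist=2, top_k=5):
    """Apply the cutoff, skip near-duplicates, and mark characters absent from the source.

    hypotheses: (text, per-character probabilities) pairs, assumed sorted by model score.
    source_text: headline and lead concatenated (assumed).
    """
    selected = []
    for text, char_probs in hypotheses:
        if perplexity(char_probs) > max_ppl:          # cutoff of unpromising candidates
            continue
        if any(edit_distance(text, prev) < min_dist for prev, _ in selected):
            continue                                  # skip redundant candidates
        unknown_chars = [ch for ch in dict.fromkeys(text) if ch not in source_text]
        selected.append((text, unknown_chars))        # characters to highlight in red
        if len(selected) == top_k:
            break
    return selected
```

If no hypothesis passes the cutoff, the tool would fall back to a "No promising candidates" message, as described in the record.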
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-13
Functionalities in the tool
To keep the system quality To display various outputs Cutoff Hilight If not in the article To encourage fact checking Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
To keep the system quality To display various outputs Cutoff Hilight If not in the article To encourage fact checking Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
[]
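The record above summarizes the tool functionalities (cutoff of unpromising candidates, skipping of near-duplicate candidates, and highlighting of characters absent from the article); the full description appears in the paper content of these records. A minimal Python sketch of that post-processing is given below. The function names and the (candidate text, per-character log-probability) hypothesis format are illustrative assumptions, while the thresholds (perplexity 1.47, i.e. a mean character likelihood of 0.68, and edit distance 2) and the limit of five displayed candidates are the values stated in the paper.

```python
import math

def perplexity(char_log_probs):
    """Inverse of the geometric mean of the per-character generation probabilities."""
    return math.exp(-sum(char_log_probs) / len(char_log_probs))

def edit_distance(a, b):
    """Levenshtein distance with unit insertion/deletion/substitution costs."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def select_candidates(hypotheses, headline, lead,
                      ppl_threshold=1.47, dist_threshold=2, max_shown=5):
    """Cut off unpromising hypotheses, skip near-duplicates, and mark characters to highlight."""
    kept = []
    for text, char_log_probs in hypotheses:  # assumed sorted by model probability, best first
        if len(kept) == max_shown:
            break
        if perplexity(char_log_probs) > ppl_threshold:
            continue  # cutoff: mean character likelihood below 0.68
        if any(edit_distance(text, other) < dist_threshold for other, _ in kept):
            continue  # skip: too similar to an already selected candidate
        # highlight characters that appear in neither the headline nor the lead
        kept.append((text, [ch for ch in text if ch not in headline and ch not in lead]))
    return kept  # list of (candidate, characters to highlight in red)
```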
GEM-SciDuet-train-34#paper-1048#slide-14
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-14
Effect of the tool release
Editors behavior in three weeks before/after the release Rate at which an editors title matches the generated one by X% Rate of 100% match titles Rate of 80+% match titles Before After Before After Editors began to refer to generated outputs after the release Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
Editors behavior in three weeks before/after the release Rate at which an editors title matches the generated one by X% Rate of 100% match titles Rate of 80+% match titles Before After Before After Editors began to refer to generated outputs after the release Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
[]
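The deployment effect in the record above is measured with an LCS-based sequence matching rate, taking for each article the maximum ROUGE-L score over the displayed candidates. A small sketch of that computation follows; normalizing the LCS length by the length of the editor's title is an assumption (the paper only states that the rate is based on the longest common subsequence), and the function names are hypothetical.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two character sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def matching_rate(editor_title, candidates):
    """Per-article sequence matching rate: best ROUGE-L-style score over all displayed candidates."""
    return max(lcs_length(editor_title, c) / len(editor_title) for c in candidates)

# Usage (hypothetical strings): a candidate sharing most characters, in order,
# with the editor's title yields a rate close to 1.0.
```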
GEM-SciDuet-train-34#paper-1048#slide-15
1048
A Case Study on Neural Headline Generation for Editing Support
There have been many studies on neural headline generation models trained with a lot of (article, headline) pairs. However, there are few situations for putting such models into practical use in the real world since news articles typically already have corresponding headlines. In this paper, we describe a practical use case of neural headline generation in a news aggregator, where dozens of professional editors constantly select important news articles and manually create their headlines, which are much shorter than the original headlines. Specifically, we show how to deploy our model to an editing support tool and report the results of comparing the behavior of the editors before and after the release.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186 ], "paper_content_text": [ "Introduction A news-aggregator is a website or mobile application that aggregates a large amount of web content, e.g., online newspapers provided by different publishers.", "The main purpose of such a service is to help users obtain important news out of vast amounts of information quickly and easily.", "Therefore, it is critical to consider how to compactly show news, as well as what type of news to select, to improve service quality.", "In fact, the news-aggregator of Yahoo!", "JAPAN 1 , the largest Japanese portal site, is supported by dozens of professional editors who constantly select important news articles and manually create their new headlines called short titles, which are much shorter than the original headline, to construct a newstopic list.", "Note that we use the term \"title\" to avoid confusion with the original news headline, although they are similar concepts.", "* Both authors contributed equally to this work.", "1 https://www.yahoo.co.jp/ (a) List of news topics including short titles.", "(b) Page of news entry including headline and lead.", "Figure 1 shows screenshots of the newsaggregator of Yahoo!", "JAPAN, where the English translations of the short title, headline and lead are listed in Table 1 .", "The left figure (a) shows the list of news topics (important news articles), which includes short titles, and the right figure (b) shows the entry page of the first topic in the list, which consists of a headline and lead.", "The lead is a short version of the article and can be used by users to decide whether to read the whole article.", "The editors' job is to create a short title from news content including the headline and lead.", "A short title has two advantages over a normal headline; one is quick understandability of the content and the other is saving display space by using a single line.", "This means that short titles can increase a user's chances of reaching interesting articles.", "Since the click-through rate of news articles is directly related to ad revenue, even a small improvement in short titles has a significant impact on business.", "We tackle an automatic-generation task of such short titles for a news aggregator to support the Japanese English translation Short title The prime minister cannot say that there is no surmise.", "Headline It cannot be said that there is no \"sontaku (surmise)\" with absolute certainty.", "The prime minister Abe said about the problem of \"Kake Gakuen (Kake school)\".", "Lead Prime Minister Shinzo Abe said, in an intensive deliberation with the House of Councilors Budget Committee held on the afternoon of the 14th, as an answer to a question about whether 
bureaucrats surmised to the prime minister regarding the Kake suspicion, \"It is difficult to understand whether there is a sontaku (surmise)\".", "He said \"It cannot be said that there was nothing wrong,\" while explaining that \"I do not need to be obsequious\".", "An answer to Ichiro Tsukada (LDP).", "Table 1 : Short title, headline, and lead in Figure 1 (b) with English versions.", "editorial process.", "Our task is a variant of newsheadline generation, which has been extensively studied, as described in Section 6.", "A clear difference between their task and ours is that we need to generate short titles from news content including headlines.", "Thus, we formulate our task as an abstractive summarization from multiple information sources, i.e., headlines and leads, based on an encoder-decoder model (Section 2).", "There are roughly three approaches for handling multiple information sources.", "The first approach is to merge all sources with some weights based on the importance of each source, which can be achieved by a weighted average of the context vectors, as in multimodal summarization (Hori et al., 2017) .", "This is the most general approach since the other two can also be regarded as special cases of the weighted average.", "The second approach is to use one source as the main source and others as secondary ones.", "This is effective when the main source can be clearly determined, such as query-focused summarization (Nema et al., 2017) , where the target document is main and a query is secondary.", "The third approach is to find the salient components of the sources.", "This is suitable when there are many sources including less informative ones (redundant sources), such as lengthydocument summarization that outputs a multisentence summary (Tan et al., 2017) , where each sentence can be regarded as one source.", "We addressed an extension of the weighted average approach and compared our proposed model with a multimodal model (Hori et al., 2017) from the first approach and a query-based model (Nema et al., 2017) from the second approach, as well as the normal encoder-decoder model.", "Since we have only two sources (headlines and leads), where the headline source is clearly salient for generating a short title, the third approach can be reduced to the normal encoder-decoder model.", "Our contributions are as follows.", "• We report on a case study of short-title generation of news articles for a news aggregator as a real-world application of neural headline generation.", "This study supports previous studies based on the encoder-decoder model from a practical standpoint since most real-world news articles basically already have headlines, which means that there has been little direct application of these previous studies.", "• We propose an encoder-decoder model with multiple encoders for separately encoding news headlines and leads (Section 3).", "Our comparative experiments with several baselines involving evaluations done by crowdsourcing workers showed the effectiveness of our model, especially using the \"usefulness\" measure (Section 4).", "• We describe how to deploy our model to an editing support tool and show the results of comparing the editors' behavior before and after releasing the tool (Section 5), which imply that the editors began to refer to generated titles after the release.", "late the following conditional likelihood p(y | x) = T −1 ∏ t=1 p(y t+1 | y ≤t , x) (1) with respect to each pair (x, y) of an input sequence x = x 1 · · · x S and output sequence y = 
y 1 · · · y T , where y ≤t = y 1 · · · y t , and maximize its mean.", "The model p(y | x) in Eq.", "(1) is computed by a combination of two recurrent neural networks (RNNs): an encoder and decoder.", "The encoder reads an input sequence x to recognize its content, and the decoder predicts an output sequence y corresponding to the content.", "More formally, an encoder calculates a hidden state h s for each element x s in a x by using the state transition function f enc of the encoder: h s = f enc (x s , h s−1 ).", "In a similar fashion, a decoder calculates a hidden stateĥ t for each element y t in a y by using the state transition function f dec of the decoder after setting the last hidden state of the encoder as the initial state of the decoder (ĥ 0 = h S ): h t = f dec (y t ,ĥ t−1 ).", "Then, a prediction of outputs for eachĥ t is calculated using the output function g dec with an attention mechanism: p(y t+1 | y ≤t , x) = g dec (ĥ t , c t ), (2) where c t is a weighted average of the encoder hidden states {h 1 , · · · , h S }, defined by c t = S ∑ s=1 a t (s)h s , (3) where a t (s) represents a weight of an encoder hidden state h s with respect to a decoder hidden statê h t .", "c t represents a soft alignment (or attention weight) to the source sequence at the target position t, so it is called a context.", "Proposed Method We propose an encoder-decoder model with multiple encoders.", "For simplicity, we describe our model assuming two encoders for news headlines and leads.", "Let d t and d ′ t be contexts calculated with Eq.", "(3) with the headline encoder and lead encoder, respectively.", "Our model combines the two context vectors inspired by a gating mechanism in long-short term memory networks (Hochreiter and Schmidhuber, 1997) as follows: w t = σ(W [d t ; d ′ t ;ĥ t ]), (4) w ′ t = σ(W ′ [d t ; d ′ t ;ĥ t ]), (5) c t = w t ⊙ d t + w ′ t ⊙ d ′ t , (6) where function σ represents the sigmoid function, i.e., σ(x) = 1/(1 + e −x ), and the operator ⊙ represents the element-wise product.", "Eq.", "(4) calculates a gating weight w t for d t , where W represents a weight matrix for a concatenated vector [d t ; d ′ t ;ĥ t ].", "Similarly, Eq.", "(5) calculates a gating weight w ′ t for d ′ t .", "Eq.", "(6) calculates a mixed context c t made from the two contexts, d t and d ′ t .", "Finally, the output function in our model is constructed by substituting c t with c t in Eq.", "(2).", "Our model can be regarded as an extension of the multimodal fusion model (Hori et al., 2017) , where multiple contexts are mixed using scalar weights, i.e., c t = αd t + βd ′ t , where α and β are positive scalar weights calculated using an attention mechanism such as a t (s) in Eq.", "(3).", "Our model can obtain a more sophisticated mixed context than their model since that model only takes into account which encoder to weigh at a time step, while our model adjusts weights on the element level.", "Experiments Dataset We prepared a dataset extracted from the newsaggregator of Yahoo!", "JAPAN by Web crawling.", "The dataset included 263K (headline, lead, short title) triples, and was split into three parts, i.e., for training (90%), validation (5%), and testing (5%).", "We preprocessed them by separating characters for training since our preliminary experiments showed that character-based training clearly performed better than word-based training.", "The statistics of our dataset are as follows.", "The average lengths of headlines, leads, and short titles are 24.87, 128.49, and 13.05 Japanese characters, 
respectively.", "The dictionary sizes (for characters) of headlines, leads, and short titles are 3618, 4226, and 3156, respectively.", "Each news article has only one short title created by a professional editor.", "The percentage of short titles equal to their headlines is only 0.13%, while the percentage of extractively solvable instances, in which the characters in each short title are completely matched by those in the corresponding headline, was about 20%.", "However, the average edit distance (Levenshtein, 1966 ) between short titles and headlines was 23.74.", "This means that short titles cannot be easily created from headlines.", "Training We implemented our model on the OpenNMT 2 toolkit.", "We used a convolutional neural network (CNN) (Kim, 2014) , instead of an RNN, to construct the lead encoder since leads are longer than headlines and require much more computational time.", "Since the CNN encoder outputs all hidden states for an input sequence in the same format as the RNN encoder, we can easily apply these states to Eq.", "(3).", "Our headline encoder still remains as an RNN (i.e., bidirectional LSTM) for fair comparison with the default implementation.", "We used a stochastic gradient descent algorithm with Nesterov momentum (Nesterov, 1983) as an optimizer, after initializing parameters by uniform sampling on (−0.1, 0.1).", "Table 2 lists the details of the hyper-parameter settings in our experiment.", "Other settings were basically the same as the default implementation of OpenNMT.", "Evaluation We conducted two crowdsourcing tasks to separately measure readability and usefulness.", "The readability task asked ten workers how readable each short title was on a four-point scale (higher is better), while the usefulness task asked them how useful the short title was compared to the corresponding article.", "The score of each generated short title was calculated by averaging the scores collected from the ten workers.", "Compared Models We prepared four models, our model GateFusion and three baselines MultiModal, QueryBased, and OpenNMT, listed below.", "We implemented the fusion mechanisms of MultiModal and 2 https://github.com/OpenNMT/OpenNMT-py Table 3 : Mean scores of readability (r), usefulness (u), and their average r+u 2 based on crowdsourcing.", "The \" †\" mark shows a statistical significance from all three baselines OpenNMT, MultiModal, and QueryBased on a one-tailed, paired t-test (p < 0.01).", "QueryBased on OpenNMT using an RNN encoder for headlines and CNN encoder for leads (see Appendix A for detailed definitions).", "• GateFusion: Our model with a gating mechanism described in Section 3.", "This is a fusion based on vector weights.", "• MultiModal: A multimodal model proposed by (Hori et al., 2017) , which can handle multimodal information such as image and audio as well as text by using separate encoders.", "The model combines contexts obtained from the encoders via an attention mechanism such as a t (s) in Eq.", "(3).", "This is a fusion based on scalar weights.", "• QueryBased: A query-based model proposed by (Nema et al., 2017) , which can finetune the attention on a document by using a query for query-focused summarization.", "We regard a headline as a document and a lead as a query since the headline is more similar to its short title.", "Specifically, the model finetunes an attention weight a t (s) for calculating a headline context d t by using a pre-computed lead context d ′ t .", "This is a fusion based on cascade connection.", "• OpenNMT: An 
encoder-decoder model with a single encoder implemented in OpenNMT, whose input is a headline only, because a variant using a lead did not perform better than this setting.", "Table 3 lists the results from the crowdsourcing tasks for readability and usefulness (see Appendix B for the details of these scores).", "Editor and Prefix in the top block of rows show the results of correct short titles created by editors and a naive model using the first 13.5 Japanese characters 3 , respectively.", "The middle and bottom blocks represent the three baselines and our models, respectively.", "We explain our hybrid model HybridFusion later.", "Each model was prepared as an ensemble of ten models by random initialization, aiming for robust performance.", "Our GateFusion clearly performed better than the three baselines regarding usefulness and interestingly outperformed even Editor.", "This implies that GateFusion tends to aggressively copy elements from source sequences.", "However, this seemed to result in complicated expressions; thus, GateFusion performed the worst with respect to readability.", "To overcome this weakness, we developed a hybrid model HybridFusion that consists of GateFusion and another fusion model QueryBased, which performed relatively well in terms of readability.", "The results indicate that HybridFusion performed the best regarding readability and usefulness.", "It can be considered that QueryBased helps GateFusion generate headline-style outputs since QueryBased mainly uses the headline source.", "Table 4 lists output examples generated by the best model OpenNMT from the three baselines and our best model HybridFusion (see Appendix C for more examples).", "In this case, the difference between OpenNMT and HybridFusion is easily comprehensible.", "The former selected \" (evolution)\", and the latter selected \" (Darvish)\" from the headline.", "In Japanese headlines, the last word tends to be important, so using the last word is basically a good strategy.", "However, the lead indicates that \"Darvish\" is more important than \"evolution\" (actually, there is no word \"evolution\" in the lead); thus, HybridFusion was able to correctly select the long name \"Darvish\" and abbreviate it to \" (Dar)\".", "In addition, it forcibly changed the style to the short title's style by putting the name into the forefront to easily get users' attention.", "This suggests that our neural-headline-generation model HybridFusion can successfully work even in this real-world application.", "Results Deployment to Editing Support Tool We deployed our short-title-generation model to an editing support tool in collaboration with the 3 13.5 is the limit in the news-aggregator, where space, numbers, and alphabet characters are counted as 0.5.", "Figure 2 : Screenshot of editing support tool displaying generated candidates for creating a short title.", "news service, as shown in Figure 2 .", "In the tool, when an editor enters the URL of an article, the tool can automatically fetch the headline and lead of the article and display up to five candidates next to the edit form of a short title, as shown in the dotted box in the figure.", "These candidates are hypotheses (with high probabilities) generated by the beam search based on the model.", "Then, the editor can effectively create a short title by referring to the generated candidates.", "This supporting feature is expected to be useful especially for inexperienced editors since the quality of short titles is heavily dependent on editors' experience.", 
"From now on, we briefly describe three features of the tool to improve its usability when displaying candidates: cutoff of unpromising candidates, skipping redundant candidates, and highlighting unknown characters.", "After that, we discuss the effect of the deployment analyzing user behavior before and after releasing the tool.", "Cutoff of Unpromising Candidates The quality of displayed candidates is one of the main factors that affect the usability of the tool.", "If the tool frequently displays unpromising candidates, editors will gradually start ignoring them.", "Therefore, we cutoff unpromising candidates whose perplexity scores are higher than a certain threshold, where the perplexity score of a candidate is calculated by the inverse of the geometric mean of the generation probabilities for all characters in the candidate.", "We set the threshold considering the results of the editors' manual evaluation, where they checked if each candidate was acceptable or not.", "Specifically, we used 1.47 (=1/0.68) as the threshold, which means that the (geometric) mean character likelihood in the candidate should be higher than 0.68.", "If all candidates are judged as unpromising, the tool displays a message like \"No promising candidates.\"", "Skipping Redundant Candidates The purpose of the tool is to give editors some new ideas for creating short titles, so it is not useful to display redundant candidates similar to others.", "Therefore, we skip candidates whose edit distance (Levenshtein, 1966) to the other candidates is lower than a threshold when selecting hypotheses in descending order of probability.", "Formally, the edit distance between two texts is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one text into the other.", "We set the threshold to 2 so as to restrict variations of Japanese particles as there are many particles with a similar meaning in Japanese 4 , e.g., \" (ha)\" and \" (ga)\".", "Although we used a unit cost for the edit distance, we can adjust the cost of each edit operation so that the tool can ignore variations of prepositions if we want to use English texts.", "Highlighting Unknown Characters One difficulty of neural models is that there is a possibility of generating incorrect or fake titles, which do not correspond to the article.", "This is a serious issue for news editing support since displayed candidates can mislead editors.", "For example, if the tool displays \" (Fujinami)\" for the news about \" (Fujinami)\", where they are different names with the same pronunciation, editors might choose the incorrect one.", "As a simple solution, we highlighted unknown characters that do not appear in both headline and lead in red.", "In Figure 2 , two phrases (\"B\" and \" \") are highlighted since they do not appear in the headline and lead.", "When a candidate includes highlighted characters, editors can carefully check if the candidate is semantically correct.", "Note that we did not exclude candidates with unknown characters so that the model can aggressively generate paraphrases and abbreviations.", "For example, the tool ROUGE-L (± SE) # articles Before 52.71% (± 0.56) 1773 After 57.65% (± 0.53) 1959 Table 5 : Sequence matching rates (ROUGE-L) of editors' titles and generated titles, which are averaged over articles over three weeks before/after releasing tool.", "suggests \" B(Soft B.)\"", "as an abbreviation of \" (Softbank)\" in the figure.", "Effect of Deployment To investigate the effect 
of the deployment, we compared the sequence matching rates between editors' correct titles and generated candidates before and after releasing the tool.", "The sequence matching rate is basically calculated by ROUGE-L (Lin, 2004) , which is defined as the rate of the length of the longest common subsequence between two sequences, i.e., a correct title and a generated candidate.", "Because we have multiple candidates for each article, we calculate the sequence matching rate as the maximum of their ROUGE-L scores, assuming that editors may refer to the most promising candidate.", "Note that the candidates were filtered by the aforementioned features, so we omitted a few articles without candidates.", "Table 5 shows the results of the sequence matching rates averaged over the articles over three weeks before and after releasing the tool.", "The results indicate that the ROUGE-L score increased by about 5 percentage points after the release.", "This implies that editors created their titles by referring to the displayed candidates to some extent.", "In fact, the ratio of the exact matched titles (ROUGE-L = 100%) in all articles (before/after the release) increased after the release by a factor of 1.62(i.e., from 3.78% to 6.13%).", "Similarly, the ratio of the 80% matched titles (ROUGE-L ≥ 80%) also increased by a factor of 1.32 (i.e., from 14.04% to 18.53%).", "This suggests that professional editors obtained new ideas from generated titles of the tool.", "Related Work We briefly review related studies from three aspects: news headline generation, editing support, and application of headline generation.", "In summary, our work is the first attempt to deploy a neural news-headline-generation model to a realworld application, i.e., news editing support tool.", "News-headline-generation tasks have been extensively studied since early times (Wang et al., 2005; Soricut and Marcu, 2006; Woodsend et al., 2010; Alfonseca et al., 2013; Sun et al., 2015; Colmenares et al., 2015) .", "In this line of research, Rush et al.", "(2015) proposed a neural model to generate news headlines and released a benchmark dataset for their task, and consequently this task has recently received increasing attention (Chopra et al., 2016; Takase et al., 2016; Kiyono et al., 2017; Zhou et al., 2017; Ayana et al., 2017; Raffel et al., 2017; Cao et al., 2018; Kobayashi, 2018) .", "However, their approaches were basically based on the encoderdecoder model, which is trained with a lot of (article, headline) pairs.", "This means that there are few situations for putting their models into the real world because news articles typically already have corresponding headlines, and most editors create a headline before its content (according to a senior journalist).", "Therefore, our work can strongly support their approaches from a practical perspective.", "Considering technologies used for editing support, there have been many studies for various purposes, such as spelling error correction (Farra et al., 2014; Hasan et al., 2015; Etoori et al., 2018) , grammatical error correction (Dahlmeier and Ng, 2012; Susanto et al., 2014; Choshen and Abend, 2018) , fact checking (Baly et al., 2018; Thorne and Vlachos, 2018; Lee et al., 2018) , fluency evaluation (Vadlapudi and Katragadda, 2010; Heilman et al., 2014; Kann et al., 2018) , and so on.", "However, when we consider their studies on our task, they are only used after editing (writing a draft).", "On the other hand, the purpose of our tool is different from theirs since our tool can 
support editors before or during editing.", "The usage of (interactive) machine translation systems (Denkowski et al., 2014; González-Rubio et al., 2016; Wuebker et al., 2016; Ye et al., 2016; Takeno et al., 2017) for supporting manual post-editing are similar to our purpose, but their task is completely different from ours.", "In other words, their task is a translation without information loss, whereas our task is a summarization that requires information compression.", "We believe that a case study on summarization is still important for the summarization community.", "There have been several studies reporting case studies on headline generation for different real services: (a) question headlines on question answering service (Higurashi et al., 2018) , (b) product headlines on e-commerce service (Wang et al., 2018) , and (c) headlines for product curation pages Camargo de Souza et al., 2018) .", "The first two (a) and (b) are extractive approaches, and the last one (c) is an abstractive approach, where the input is a set of slot/value pairs, such as \"color/white.\"", "That is, our task is more difficult to use in the real-world.", "In addition, application to news services tends to be sensitive since news articles contain serious contents such as incidents, accidents, and disasters.", "Thus, our work should be valuable as a rare case study applying a neural model to such a news service.", "Conclusion We addressed short-title generation from news articles for a news aggregator to support the editorial process.", "We proposed an encoder-decoder model with multiple encoders for separately encoding multiple information sources, i.e., news headlines and leads.", "Comparative experiments using crowdsourcing showed that our hybrid model performed better than the baselines, especially using the usefulness measure.", "We deployed our model to an editing support tool and empirically confirmed that professional editors began to refer to the generated titles after the release.", "Future research will include verifying how much our headline generation model can affect practical performance indicators, such as click-through rate.", "In this case, we need to develop a much safer model since our model sometimes yields erroneous outputs or fake news titles, which cannot be directly used in the commercial service." ] }
{ "paper_header_number": [ "1", "3", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Proposed Method", "Dataset", "Training", "Evaluation", "Compared Models", "Deployment to Editing Support Tool", "Cutoff of Unpromising Candidates", "Skipping Redundant Candidates", "Highlighting Unknown Characters", "Effect of Deployment", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-34#paper-1048#slide-15
Conclusion
Short titles were successfully generated for editing support. Editors began to refer to generated titles of our system. Verify how much our model can affect click-through rate. Need a much safer model to avoid generating fake titles. We would like to thank editors and engineers in the news service who continuously supported our experiments. Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
Short titles were successfully generated for editing support. Editors began to refer to generated titles of our system. Verify how much our model can affect click-through rate. Need a much safer model to avoid generating fake titles. We would like to thank editors and engineers in the news service who continuously supported our experiments. Copyright 2019 Yahoo Japan Corporation. All Rights Reserved.
[]
GEM-SciDuet-train-35#paper-1049#slide-0
1049
Transformation Networks for Target-Oriented Sentiment Classification
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of attention mechanism and the obstacles that block CNN to perform well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originated from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
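As a concrete illustration of the Target-Specific Transformation described in Eqs. 3-5 of the paper content above, the NumPy sketch below computes, for every sentence word, a softmax-weighted target representation and fuses it with the word representation through a fully-connected layer. Shapes, variable names, and the choice of tanh for the activation g(*) are assumptions; the authors' actual implementation is the released TNet code referenced in the paper.

```python
# Hedged sketch of the TST component: each context word gets its own
# softmax-weighted target vector (Eqs. 3-4), then a fully-connected fusion (Eq. 5).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def tst(H, H_tau, W_tau, b_tau):
    """
    H:     (n, d)  contextualized sentence-word representations h_i
    H_tau: (m, d)  target-word representations h_j^tau
    W_tau: (d, 2d) fully-connected weights (shape assumed), b_tau: (d,) bias
    returns (n, d) target-specific word representations
    """
    scores = softmax(H @ H_tau.T, axis=-1)        # relatedness F(h_i, h_j^tau), shape (n, m)
    r_tau = scores @ H_tau                        # tailor-made target vectors r_i^tau, (n, d)
    fused = np.concatenate([H, r_tau], axis=-1)   # concatenation [h_i : r_i^tau], (n, 2d)
    return np.tanh(fused @ W_tau.T + b_tau)       # g assumed to be tanh

# Toy usage: 6 sentence words, 2 target words, feature dimension 4.
n, m, d = 6, 2, 4
rng = np.random.default_rng(0)
H, H_tau = rng.normal(size=(n, d)), rng.normal(size=(m, d))
W_tau, b_tau = rng.normal(size=(d, 2 * d)), np.zeros(d)
H_tilde = tst(H, H_tau, W_tau, b_tau)             # (6, 4)
```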
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-0
Introduction
Target-Oriented Sentiment Classification (TOSC) aims to detect the overall opinion / sentiment of a user review towards the given opinion target. TOSC is a supporting task of Target / Aspect-based Sentiment Analysis. TOSC has been investigated extensively under other names: Targeted Sentiment Prediction [6, 14]. Target-Dependent Sentiment Classification [2, 9].
Target-Oriented Sentiment Classification (TOSC) aims to detect the overall opinion / sentiment of a user review towards the given opinion target. TOSC is a supporting task of Target / Aspect-based Sentiment Analysis. TOSC has been investigated extensively under other names: Targeted Sentiment Prediction [6, 14]. Target-Dependent Sentiment Classification [2, 9].
[]
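The position-aware convolutional feature extractor from the paper content above (Eqs. 10-13) can be sketched as follows: position relevance v_i scales each word representation, a width-s convolution extracts n-gram features, and max pooling yields the sentence vector z. Indexing conventions (1-based i and k), the clipping of v to [0, 1], and all names are assumptions rather than the authors' implementation.

```python
# Hedged sketch of position relevance (Eq. 10), proximity scaling (Eq. 11),
# and the convolution + max-pooling feature extractor (Eqs. 12-13).
import numpy as np

def position_relevance(n, k, m, C):
    """v_i for i = 1..n, with k the 1-based index of the first target word,
    m the target length, and C a pre-specified constant (Eq. 10)."""
    v = np.zeros(n)
    for i in range(1, n + 1):
        if i < k + m:
            v[i - 1] = 1.0 - (k + m - i) / C
        else:                                  # k + m <= i <= n
            v[i - 1] = 1.0 - (i - k) / C
    return np.clip(v, 0.0, 1.0)                # clipping is an assumption for safety

def conv_maxpool(H_hat, kernels, biases, s):
    """c_i = ReLU(w_conv . h_hat[i:i+s-1] + b); z = max over positions (Eqs. 12-13)."""
    n, d = H_hat.shape
    windows = np.stack([H_hat[i:i + s].reshape(-1) for i in range(n - s + 1)])  # (n-s+1, s*d)
    feature_maps = np.maximum(windows @ kernels.T + biases, 0.0)                # ReLU
    return feature_maps.max(axis=0)                                             # z, one value per kernel

# Toy usage: 8-word sentence, target spans words 3-4, kernel size 3, 5 kernels.
n, d, s, nk = 8, 4, 3, 5
rng = np.random.default_rng(2)
H = rng.normal(size=(n, d))
v = position_relevance(n, k=3, m=2, C=40)
H_hat = H * v[:, None]                         # Eq. 11: downweight words far from the target
z = conv_maxpool(H_hat, rng.normal(size=(nk, s * d)), np.zeros(nk), s)
```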
GEM-SciDuet-train-35#paper-1049#slide-1
1049
Transformation Networks for Target-Oriented Sentiment Classification
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of attention mechanism and the obstacles that block CNN to perform well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originated from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
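The experimental setup quoted in the record above pins down a few concrete choices: 300-dimensional pre-trained GloVe vectors, out-of-vocabulary embeddings sampled from U(-0.25, 0.25), dropout on the input embeddings and on the sentence representation z, and cross-entropy training with Adam. The snippet below is a minimal, hedged sketch of the embedding-initialisation step only; the GloVe file name and the vocab dictionary are placeholders and are not taken from the paper's released code.

```python
import numpy as np

def build_embedding_matrix(vocab, glove_path="glove.txt", dim_w=300, seed=1):
    """vocab: dict mapping word -> row index (placeholder structure, not from the paper)."""
    rng = np.random.RandomState(seed)
    # Every row starts as a U(-0.25, 0.25) sample; rows never found in the GloVe
    # file keep this sample, which is the OOV handling described in the setup above.
    emb = rng.uniform(-0.25, 0.25, size=(len(vocab), dim_w)).astype("float32")
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            if word in vocab and len(values) == dim_w:
                emb[vocab[word]] = np.asarray(values, dtype="float32")
    return emb
```

Per the same passage, the model's other weight matrices would analogously be drawn from U(-0.01, 0.01) with zero biases.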
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-1
Problem Formulation
TOSC is a typical classification task, but the input text comes from two sources: Target: the explicitly mentioned opinion target phrase, also called aspect term or aspect. Context: the original review sentence or the sentence without the target phrase. TOSC aims to predict the overall sentiment of the context towards the target. [Boot time] is super fast, around anywhere from 35 seconds to 1 minute. This review conveys positive sentiment over the input target Boot time. Great [food] but the [service] is dreadful. Given the target food, the sentiment polarity is positive, while if the input target is service, it becomes negative.
TOSC is a typical classification task, but the input text comes from two sources: Target: the explicitly mentioned opinion target phrase, also called aspect term or aspect. Context: the original review sentence or the sentence without the target phrase. TOSC aims to predict the overall sentiment of the context towards the target. [Boot time] is super fast, around anywhere from 35 seconds to 1 minute. This review conveys positive sentiment over the input target Boot time. Great [food] but the [service] is dreadful. Given the target food, the sentiment polarity is positive, while if the input target is service, it becomes negative.
[]
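To make the (target, context) formulation in the "Problem Formulation" slide above concrete, here are the same two examples written out as plain data; the triple layout is only illustrative and is not a format prescribed by the paper or the dataset.

```python
# (target, sentence, polarity) triples: the same sentence can carry different
# labels depending on which target is given, which is what makes the task
# target-oriented rather than sentence-level.
examples = [
    ("Boot time", "[Boot time] is super fast, around anywhere from 35 seconds to 1 minute.", "positive"),
    ("food", "Great [food] but the [service] is dreadful.", "positive"),
    ("service", "Great [food] but the [service] is dreadful.", "negative"),
]

for target, sentence, polarity in examples:
    print(f"{target!r} -> {polarity}")
```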
GEM-SciDuet-train-35#paper-1049#slide-2
1049
Transformation Networks for Target-Oriented Sentiment Classification *
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of attention mechanism and the obstacles that block CNN to perform well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originated from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
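Eqs. 2-5 in the paper content above define the Target-Specific Transformation: every context word gets its own target vector, built as a softmax-weighted sum of the target-word BiLSTM states, and the concatenation of the two is passed through a fully-connected layer. Below is a hedged numpy sketch of just that step; it takes the BiLSTM outputs as given, uses tanh for the unspecified non-linearity g, and initialises W_tau and b_tau randomly purely so the example runs. None of this is the authors' released implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def tst(H, H_tau, W_tau, b_tau):
    """H: (n, 2*dim_h) context-word states; H_tau: (m, 2*dim_h) target-word states."""
    out = np.empty_like(H)
    for i in range(H.shape[0]):
        scores = softmax(H_tau @ H[i])                 # Eq. 4: relatedness F(h_i, h_tau_j)
        r_i = scores @ H_tau                           # Eq. 3: tailor-made target representation
        out[i] = np.tanh(W_tau @ np.concatenate([H[i], r_i]) + b_tau)  # Eq. 5, g = tanh (assumed)
    return out

# Toy shapes: 7 context words, a 2-word target, dim_h = 4.
rng = np.random.RandomState(0)
H, H_tau = rng.randn(7, 8), rng.randn(2, 8)
W_tau, b_tau = rng.randn(8, 16) * 0.01, np.zeros(8)
print(tst(H, H_tau, W_tau, b_tau).shape)               # (7, 8)
```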
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-2
Motivation
Convolutional Neural Network (CNN) is more suitable for this task. Sentiments towards the targets are usually determined by key phrases. Example: This [dish] is my favorite and I always get it and never get tired of it. CNN, whose aim is to capture the most informative n-grams (e.g., is my favorite) in the sentence, should be a suitable model. Attention-based weighted combination of the entire word-level features may introduce noise (e.g., never and tired in the above sentence). We employ a proximity-based CNN rather than an attention-based RNN as the top-most feature extractor. CNN likely fails in cases where a sentence expresses different sentiments over multiple targets. Example: great [food] but the [service] was dreadful! CNN cannot fully explore the target information via vector concatenation. Combining context information and word embedding is an effective way to represent a word in the convolution-based architecture [4]. (i) We propose a Target-Specific Transformation (TST) component to better consolidate the target information with word representations. (ii) We design two context-preserving mechanisms, Adaptive Scaling (AS) and Lossless Forwarding (LF), to combine the contextualized representations and the transformed representations. Most of the existing works do not discriminate different words in the same target phrase. In the target phrase, different words would not contribute equally to the target representation. For example, in amd turin processor, the phrase head processor is more important than amd and turin. Our TST solves this problem in two steps: (i) Explicitly calculating the importance scores of the target words. (ii) Conducting word-level association between the target and its context.
Convolutional Neural Network (CNN) is more suitable for this task. Sentiments towards the targets are usually determined by key phrases. Example: This [dish] is my favorite and I always get it and never get tired of it. CNN, whose aim is to capture the most informative n-grams (e.g., is my favorite) in the sentence, should be a suitable model. Attention-based weighted combination of the entire word-level features may introduce noise (e.g., never and tired in the above sentence). We employ a proximity-based CNN rather than an attention-based RNN as the top-most feature extractor. CNN likely fails in cases where a sentence expresses different sentiments over multiple targets. Example: great [food] but the [service] was dreadful! CNN cannot fully explore the target information via vector concatenation. Combining context information and word embedding is an effective way to represent a word in the convolution-based architecture [4]. (i) We propose a Target-Specific Transformation (TST) component to better consolidate the target information with word representations. (ii) We design two context-preserving mechanisms, Adaptive Scaling (AS) and Lossless Forwarding (LF), to combine the contextualized representations and the transformed representations. Most of the existing works do not discriminate different words in the same target phrase. In the target phrase, different words would not contribute equally to the target representation. For example, in amd turin processor, the phrase head processor is more important than amd and turin. Our TST solves this problem in two steps: (i) Explicitly calculating the importance scores of the target words. (ii) Conducting word-level association between the target and its context.
[]
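The "proximity-based CNN" mentioned in the Motivation slide above corresponds to Eqs. 10-13 of the paper content: each word feature is scaled by its positional relevance to the target before a size-s convolution and max pooling produce the sentence representation z. A hedged sketch follows; the constant C, the number of kernels, and the 0-based indexing (which simply skips the padded i > n case) are illustrative choices, since the actual hyper-parameter values sit in a table not reproduced in this dump.

```python
import numpy as np

def position_relevance(n, k, m, C=40.0):
    """n: sentence length, k: index of the first target word, m: target length (Eq. 10)."""
    v = np.empty(n)
    for i in range(n):
        v[i] = 1.0 - (k + m - i) / C if i < k + m else 1.0 - (i - k) / C
    return v

def conv_max_pool(H, v, kernels, biases, s=3):
    Hw = H * v[:, None]                                # Eq. 11: down-weight words far from the target
    windows = [Hw[i:i + s].ravel() for i in range(Hw.shape[0] - s + 1)]
    z = []
    for w, b in zip(kernels, biases):                  # one max-pooled value per kernel
        c = [max(0.0, float(w @ win + b)) for win in windows]   # Eq. 12: ReLU(w . h_{i:i+s-1} + b)
        z.append(max(c))                               # Eq. 13: max pooling over positions
    return np.array(z)                                 # sentence representation z

rng = np.random.RandomState(0)
H = rng.randn(7, 8)                                    # 7 words, 2*dim_h = 8
v = position_relevance(n=7, k=2, m=2)
kernels, biases = [rng.randn(3 * 8) * 0.1 for _ in range(5)], np.zeros(5)
print(conv_max_pool(H, v, kernels, biases).shape)      # (5,): one feature per kernel
```

In the full model, z would then pass through a fully-connected softmax layer (Eq. 14) to yield the three-way polarity prediction.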
GEM-SciDuet-train-35#paper-1049#slide-3
1049
Transformation Networks for Target-Oriented Sentiment Classification *
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of attention mechanism and the obstacles that block CNN to perform well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originated from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-3
Model Overview
[Figure labels: Convolutional layer, Transformation architecture (CPT/TST), Bi-directional LSTM, fully-connected] Figure: Architecture of TNet. The proposed TNet consists of the following three components: (BOTTOM) Bi-directional LSTM for memory building: generating contextualized word representations. (MIDDLE) Deep Transformation architecture for learning target-specific word representations: refining word-level representations with the input target and the contextual information. (TOP) Proximity-based convolutional feature extractor: introducing position information to detect the most salient features more accurately.
[Figure labels: Convolutional layer, Transformation architecture (CPT/TST), Bi-directional LSTM, fully-connected] Figure: Architecture of TNet. The proposed TNet consists of the following three components: (BOTTOM) Bi-directional LSTM for memory building: generating contextualized word representations. (MIDDLE) Deep Transformation architecture for learning target-specific word representations: refining word-level representations with the input target and the contextual information. (TOP) Proximity-based convolutional feature extractor: introducing position information to detect the most salient features more accurately.
[]
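The paper_content field of the record above walks through the Target-Specific Transformation (Eqs. 2-5): for each sentence word, a softmax over its relatedness to the target-word states builds a tailor-made target vector, which is then concatenated with the word state and passed through a fully-connected layer. Below is a minimal NumPy sketch of that step, assuming `tanh` for the unspecified activation g(*) and toy weight shapes; the function name `tst` and the random example tensors are illustrative, not the authors' released implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def tst(h_sent, h_tgt, W_tau, b_tau):
    """Target-Specific Transformation sketch (Eqs. 2-5 in the record above).

    h_sent: (n, d) contextualized sentence-word states h^(l)
    h_tgt:  (m, d) BiLSTM states of the target words
    W_tau:  (d, 2*d), b_tau: (d,) -- fully-connected layer parameters
    """
    # Eq. 4: relatedness of each sentence word to every target word
    scores = softmax(h_sent @ h_tgt.T, axis=-1)         # (n, m)
    # Eq. 3: tailor-made target representation per sentence word
    r_tau = scores @ h_tgt                              # (n, d)
    # Eq. 5: concatenate and apply a non-linear layer (tanh assumed for g)
    concat = np.concatenate([h_sent, r_tau], axis=-1)   # (n, 2*d)
    return np.tanh(concat @ W_tau.T + b_tau)            # (n, d)

# Illustrative run: n=7 sentence words, m=2 target words, d=4 dimensions
rng = np.random.default_rng(0)
h_sent, h_tgt = rng.normal(size=(7, 4)), rng.normal(size=(2, 4))
out = tst(h_sent, h_tgt, rng.normal(size=(4, 8)), np.zeros(4))
```

Because the attention in Eq. 4 is computed per sentence word rather than once for the whole sentence, each word receives its own target representation, which is the point the record's text makes against averaging the target embeddings.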
GEM-SciDuet-train-35#paper-1049#slide-4
1049
Transformation Networks for Target-Oriented Sentiment Classification *
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of attention mechanism and the obstacles that block CNN to perform well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originated from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-4
Deep Transformation Architecture
Deeper network helps to learn more abstract features (He et al., 2016).
Deeper network helps to learn more abstract features (He et al., 2016).
[]
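The record above ("Deep Transformation Architecture") refers to the Context-Preserving Transformation layers, whose paper_content text gives two ways to merge a layer's input with its TST output: Lossless Forwarding (Eq. 6) and Adaptive Scaling (Eqs. 8-9). Here is a minimal NumPy sketch of that merge step, assuming a per-dimension gate and random example tensors; the name `context_preserve` and the toy shapes are illustrative choices, not the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_preserve(h, h_tst, W_trans=None, b_trans=None, mode="LF"):
    """Combine a CPT layer's input h with its TST output h_tst (Eqs. 6-9 above).

    h, h_tst: (n, d) arrays
    mode: "LF" for Lossless Forwarding, "AS" for Adaptive Scaling
    """
    if mode == "LF":
        # Eq. 6: add the untouched contextualized input back to the transformed features
        return h + h_tst
    # Eq. 8: per-dimension gate computed from the layer input
    t = sigmoid(h @ W_trans.T + b_trans)                # (n, d)
    # Eq. 9: convex combination -- t scales the transformed features,
    # (1 - t) keeps the contextualized input
    return t * h_tst + (1.0 - t) * h

# Illustrative usage with random tensors (n=7 words, d=4 dimensions)
rng = np.random.default_rng(0)
h, h_tst = rng.normal(size=(7, 4)), rng.normal(size=(7, 4))
out_lf = context_preserve(h, h_tst, mode="LF")
out_as = context_preserve(h, h_tst, rng.normal(size=(4, 4)), np.zeros(4), mode="AS")
```

With mode="AS" the gate decides, dimension by dimension, how much transformed versus contextualized signal survives into the next layer, matching the unfolded convex combination given after Eq. 9 in the record's text.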
GEM-SciDuet-train-35#paper-1049#slide-5
1049
Transformation Networks for Target-Oriented Sentiment Classification *
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of attention mechanism and the obstacles that block CNN to perform well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originated from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
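The paper content above walks through the Target-Specific Transformation (Section 2.2.1, Eqs. 3–5), but the extracted formulas are hard to read. Below is a minimal NumPy sketch of that step. It assumes the sentence and target BiLSTM states are already available as plain arrays, collapses the paper's 2·dim_h hidden size into a single dimension d, and treats g(·) as tanh; the names `tst`, `W_tau`, `b_tau` are illustrative and not taken from the authors' released code.

```python
# Hedged NumPy sketch of the Target-Specific Transformation (TST), Eqs. 3-5.
import numpy as np

def tst(h, h_tau, W_tau, b_tau):
    """h: (n, d) sentence word states; h_tau: (m, d) target word states."""
    # Eq. 4: relatedness between every sentence word i and target word j,
    # normalized over the target words with a softmax.
    scores = h @ h_tau.T                                   # (n, m)
    scores = scores - scores.max(axis=1, keepdims=True)    # numerical stability
    F = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    # Eq. 3: a tailor-made target representation r_i for every sentence word.
    r_tau = F @ h_tau                                      # (n, d)
    # Eq. 5: fuse each word state with its target representation via an FC
    # layer; the non-linearity g(.) is assumed to be tanh here.
    fused = np.concatenate([h, r_tau], axis=1)             # (n, 2d)
    return np.tanh(fused @ W_tau + b_tau)                  # (n, d)

# Toy usage with random stand-ins for the BiLSTM states.
rng = np.random.default_rng(0)
n, m, d = 7, 2, 8                       # sentence length, target length, hidden dim
h = rng.normal(size=(n, d))
h_tau = rng.normal(size=(m, d))
W_tau = 0.1 * rng.normal(size=(2 * d, d))
b_tau = np.zeros(d)
print(tst(h, h_tau, W_tau, b_tau).shape)   # -> (7, 8)
```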
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-5
CPT Layer
The functions of the CPT layer are twofold: (1) Incorporating opinion target information into the word-level representations: it generates a context-aware target representation r^τ_i conditioned on the i-th word representation h^(l)_i fed to the l-th layer, r^τ_i = Σ_j h^τ_j * F(h^(l)_i, h^τ_j), and then obtains the target-specific word representation h̃^(l)_i. (2) Preserving context information for the upper layers: we design two Context-Preserving Mechanisms to add context information back to the transformed word features h̃^(l)_i: (i) Adaptive Scaling (AS), similar to a highway connection [8], with gate t^(l)_i = σ(W_trans h^(l)_i + b_trans); (ii) Lossless Forwarding (LF), similar to a residual connection [3], with h^(l+1)_i = h^(l)_i + h̃^(l)_i.
The functions of the CPT layer are twofold: (1) Incorporating opinion target information into the word-level representations: it generates a context-aware target representation r^τ_i conditioned on the i-th word representation h^(l)_i fed to the l-th layer, r^τ_i = Σ_j h^τ_j * F(h^(l)_i, h^τ_j), and then obtains the target-specific word representation h̃^(l)_i. (2) Preserving context information for the upper layers: we design two Context-Preserving Mechanisms to add context information back to the transformed word features h̃^(l)_i: (i) Adaptive Scaling (AS), similar to a highway connection [8], with gate t^(l)_i = σ(W_trans h^(l)_i + b_trans); (ii) Lossless Forwarding (LF), similar to a residual connection [3], with h^(l+1)_i = h^(l)_i + h̃^(l)_i.
[]
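The CPT Layer slide above names the two context-preserving mechanisms, but their formulas were lost in extraction. The following is a hedged NumPy sketch of both strategies (Eqs. 6, 8 and 9), with the TST treated as a black-box transform of the same shape; all variable names are placeholders, and the gate placement follows the unfolded form of Eq. 9 rather than the garbled inline equation.

```python
# Sketch of the two context-preserving strategies from Section 2.2.2.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lossless_forwarding(h, transform):
    # Eq. 6: feed the untransformed input forward unchanged, so the
    # contextualized h^(0) is still present after every CPT layer.
    return h + transform(h)

def adaptive_scaling(h, transform, W_trans, b_trans):
    # Eq. 8: a per-dimension gate computed from the layer input.
    t = sigmoid(h @ W_trans + b_trans)
    # Eq. 9: convex combination; following the unfolded form in the paper,
    # the gate scales the transformed features.
    return t * transform(h) + (1.0 - t) * h

# Toy usage: stack L layers of either variant on random word features.
rng = np.random.default_rng(1)
n, d, L = 7, 8, 2
h = rng.normal(size=(n, d))
W_trans = 0.1 * rng.normal(size=(d, d))
b_trans = np.zeros(d)
M = 0.1 * rng.normal(size=(d, d))
transform = lambda x: np.tanh(x @ M)     # stand-in for the full TST component
for _ in range(L):
    h = adaptive_scaling(h, transform, W_trans, b_trans)
print(h.shape)                           # -> (7, 8)
```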
GEM-SciDuet-train-35#paper-1049#slide-6
1049
Transformation Networks for Target-Oriented Sentiment Classification
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. An RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves state-of-the-art performance. After re-examining the drawbacks of the attention mechanism and the obstacles that prevent CNNs from performing well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originating from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, while incorporating a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves new state-of-the-art performance on several benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
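The results discussion above repeatedly reports pairwise t-tests (e.g., p < 0.01 against BILSTM-ATT-G and RAM, p < 0.05 for the position ablation). A minimal sketch of how such a paired comparison could be computed; the per-run accuracy lists are made-up placeholders, and pairing runs by shared seeds/splits is an assumption, since the text does not spell out the protocol.

```python
# Hedged sketch: paired t-test between two models' per-run accuracies.
# The numbers below are placeholders, not results from the paper.
from scipy import stats

tnet_lf_runs = [76.0, 75.9, 76.1, 75.8, 76.2]        # hypothetical accuracies
bilstm_att_g_runs = [74.1, 74.3, 74.0, 74.2, 73.9]   # hypothetical accuracies

t_stat, p_value = stats.ttest_rel(tnet_lf_runs, bilstm_att_g_runs)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # p < 0.01 would match the dagger marker in Table 3
```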
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-6
Proximity-based Convolutional Feature Extractor
This component aims to capture the most salient feature w.r.t. the current target for sentiment prediction. Position information is effective for better locating the salient features. Basic idea: Up-weighting the words close to the target and down-weighting those far away from the target. Convolutional neural network (Kim, 2014) is used to extract features from the weighted word representations.
This component aims to capture the most salient feature w.r.t. the current target for sentiment prediction. Position information is effective for better locating the salient features. Basic idea: Up-weighting the words close to the target and down-weighting those far away from the target. Convolutional neural network (Kim, 2014) is used to extract features from the weighted word representations.
[]
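A minimal NumPy sketch of the proximity-weighted convolutional feature extractor summarized in the slide above and spelled out as Eqs. 10-13 in the paper content of the following records. The toy shapes, the random inputs, and the clipping of negative weights to zero are illustrative assumptions, not part of the original model.

```python
import numpy as np

def position_relevance(n, k, m, C):
    """Proximity weight v_i of Eq. 10 (1-based word indices, as in the paper).
    k: index of the first target word, m: target length, C: pre-set constant.
    Padded positions beyond n would receive weight 0."""
    v = np.zeros(n)
    for i in range(1, n + 1):
        if i < k + m:
            v[i - 1] = 1.0 - (k + m - i) / C
        else:                       # k + m <= i <= n
            v[i - 1] = 1.0 - (i - k) / C
    return np.clip(v, 0.0, 1.0)     # assumption: floor negative weights at 0

def conv_max_pool(H, W_conv, b_conv):
    """1-D convolution over the weighted word features plus max pooling
    (Eqs. 12-13). H: (n, d), W_conv: (n_k, s*d), b_conv: (n_k,)."""
    n, d = H.shape
    n_k, sd = W_conv.shape
    s = sd // d                     # kernel size
    feats = []
    for i in range(n - s + 1):
        window = H[i:i + s].reshape(-1)                          # concatenated window
        feats.append(np.maximum(W_conv @ window + b_conv, 0.0))  # ReLU
    c = np.stack(feats, axis=1)                                  # (n_k, n - s + 1)
    return c.max(axis=1)                                         # sentence vector z

# Toy usage: 8 words, 50-dim features, a 2-word target starting at position 4.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 50))
v = position_relevance(n=8, k=4, m=2, C=40.0)
z = conv_max_pool(H * v[:, None], rng.standard_normal((10, 3 * 50)), np.zeros(10))
print(v.round(3), z.shape)
```

Max pooling keeps only the strongest response per kernel, which is why the case study later refers to a single most informative trigram feature per target.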
GEM-SciDuet-train-35#paper-1049#slide-7
1049
Transformation Networks for Target-Oriented Sentiment Classification
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of the attention mechanism and the obstacles that prevent CNN from performing well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originating from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, and meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
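The model description above defines the Target-Specific Transformation (Eqs. 2-5): each context word attends over the target words with a dot-product softmax, builds its own target vector r_i, and fuses it with the word representation through a fully-connected layer. A minimal NumPy sketch; treating the BiLSTM outputs as given matrices, using tanh for the unspecified non-linearity g, and the toy dimensions are assumptions made for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def target_specific_transformation(H, H_tau, W_tau, b_tau):
    """Target-Specific Transformation (Eqs. 3-5).
    H:     (n, d) context word representations from the sentence BiLSTM
    H_tau: (m, d) target word representations from the target BiLSTM
    W_tau: (d, 2d), b_tau: (d,) parameters of the fully-connected layer."""
    out = []
    for h_i in H:
        alpha = softmax(H_tau @ h_i)            # relatedness F(h_i, h_tau_j), Eq. 4
        r_i = alpha @ H_tau                     # tailor-made target vector, Eq. 3
        fused = W_tau @ np.concatenate([h_i, r_i]) + b_tau
        out.append(np.tanh(fused))              # g assumed to be tanh, Eq. 5
    return np.stack(out)                        # (n, d) target-specific representations

# Toy usage: 8 context words, a 3-word target, d = 2 * dim_h = 100.
rng = np.random.default_rng(1)
H, H_tau = rng.standard_normal((8, 100)), rng.standard_normal((3, 100))
W_tau, b_tau = rng.standard_normal((100, 200)) * 0.01, np.zeros(100)
print(target_specific_transformation(H, H_tau, W_tau, b_tau).shape)  # (8, 100)
```

Computing a separate softmax per context word is what distinguishes TST from averaging the target embedding once per sentence, which the description above argues can neutralize multi-word targets.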
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-7
Settings
LAPTOP, REST: datasets from the SemEval14 ABSA challenge, containing user reviews from the laptop domain and the restaurant domain respectively. TWITTER: a dataset built in (Dong et al., 2014), containing twitter posts in which the opinion targets are annotated.
LAPTOP, REST: datasets from the SemEval14 ABSA challenge, containing user reviews from the laptop domain and the restaurant domain respectively. TWITTER: a dataset built in (Dong et al., 2014), containing twitter posts in which the opinion targets are annotated.
[]
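The experimental setup described in this record evaluates with Accuracy and Macro-Averaged F1 over the three labels P/N/O. A short illustration of the two metric calls; the label lists are made-up placeholders.

```python
# The three sentiment labels used throughout the paper: P, N, O.
# Labels below are made-up placeholders, only to show the metric calls.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["P", "N", "O", "P", "N", "O", "P", "P"]
y_pred = ["P", "N", "O", "N", "N", "O", "P", "O"]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Macro-F1:", f1_score(y_true, y_pred, average="macro"))
```

Macro-averaging gives each class equal weight regardless of its frequency, which is why the setup calls it more appropriate for datasets with unbalanced classes.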
GEM-SciDuet-train-35#paper-1049#slide-8
1049
Transformation Networks for Target-Oriented Sentiment Classification
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of the attention mechanism and the obstacles that prevent CNN from performing well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originating from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, and meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
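As a reading aid for the experimental setup quoted in the paper content above (pre-trained 300-dimensional GloVe vectors, with out-of-vocabulary words sampled from U(−0.25, 0.25)), here is a minimal numpy sketch of that embedding-initialization step. The function name and the `vocab` / `glove_vectors` arguments are illustrative assumptions, not taken from the released code.

```python
import numpy as np

def build_embedding_matrix(vocab, glove_vectors, dim_w=300, seed=1):
    """Initialise word embeddings as described in the experimental setup:
    pre-trained 300-d GloVe vectors for in-vocabulary words, and samples
    from U(-0.25, 0.25) for out-of-vocabulary words. The seed is arbitrary."""
    rng = np.random.RandomState(seed)
    emb = np.zeros((len(vocab), dim_w), dtype=np.float32)
    for word, idx in vocab.items():
        vec = glove_vectors.get(word)
        emb[idx] = vec if vec is not None else rng.uniform(-0.25, 0.25, size=dim_w)
    return emb
```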
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-8
Main Results
(Results table: ACC and Macro-F1 on each of the three datasets.) The proposed TNet-LF and TNet-AS consistently outperform the baselines. TNet variants perform well on both user reviews (LAPTOP, REST) and twitter posts (TWITTER).
(Results table: ACC and Macro-F1 on each of the three datasets.) The proposed TNet-LF and TNet-AS consistently outperform the baselines. TNet variants perform well on both user reviews (LAPTOP, REST) and twitter posts (TWITTER).
[]
GEM-SciDuet-train-35#paper-1049#slide-9
1049
Transformation Networks for Target-Oriented Sentiment Classification *
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of attention mechanism and the obstacles that block CNN to perform well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originated from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
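The Target-Specific Transformation (TST) described in the paper content above (Eqs. 2–5) computes, for each context word, a tailor-made target vector and fuses it with the word representation through a fully-connected layer. The numpy sketch below is an illustration under stated assumptions: `tanh` stands in for the unspecified non-linearity g(*), and all names and shapes are mine rather than taken from the released code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def target_specific_transform(H, H_tau, W_tau, b_tau):
    """Sketch of Eqs. (3)-(5).
    H:     (n, 2*dim_h) BiLSTM states of the n sentence words.
    H_tau: (m, 2*dim_h) BiLSTM states of the m target words.
    W_tau: (2*dim_h, 4*dim_h), b_tau: (2*dim_h,) fully-connected weights."""
    out = np.zeros_like(H)
    for i in range(H.shape[0]):
        scores = softmax(H_tau @ H[i])            # relatedness F(h_i, h_j^tau), Eq. (4)
        r_i = scores @ H_tau                      # tailor-made target vector, Eq. (3)
        fused = np.concatenate([H[i], r_i])       # [h_i : r_i]
        out[i] = np.tanh(W_tau @ fused + b_tau)   # Eq. (5); g assumed to be tanh
    return out
```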
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-9
Ablation Experiment
(Ablation results table: ACC and Macro-F1 on each of the three datasets.) Using attention (ATT) or a fully-connected layer (FC) to replace the CPT layer makes the performance worse. Each component of TNet contributes to the overall performance improvement.
(Ablation results table: ACC and Macro-F1 on each of the three datasets.) Using attention (ATT) or a fully-connected layer (FC) to replace the CPT layer makes the performance worse. Each component of TNet contributes to the overall performance improvement.
[]
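The proximity strategy in Eqs. (10)–(11) of the paper content above scales each word representation by its positional relevance to the target before the convolutional feature extractor. A small numpy sketch follows; the function names are mine, the indexing follows the 1-indexed formulation of Eq. (10), and the constant C is left as an argument because its tuned value (listed in Table 2 of the paper) is not visible here.

```python
import numpy as np

def position_relevance(seq_len, n, k, m, C):
    """Eq. (10): relevance v_i of the i-th word (1-indexed) to a target that
    starts at index k and spans m words; n is the actual sentence length,
    and padded positions beyond n keep zero relevance."""
    v = np.zeros(seq_len)
    for i in range(1, seq_len + 1):
        if i < k + m:
            v[i - 1] = 1.0 - (k + m - i) / C
        elif i <= n:
            v[i - 1] = 1.0 - (i - k) / C
        # i > n: padding position, relevance stays 0
    return v

def apply_position(H, v):
    """Eq. (11): down-weight word representations far from the target."""
    return H * v[:, None]
```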
GEM-SciDuet-train-35#paper-1049#slide-10
1049
Transformation Networks for Target-Oriented Sentiment Classification *
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of attention mechanism and the obstacles that block CNN to perform well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originated from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations $h^{\tau} \in \mathbb{R}^{m \times 2\dim_h}$: $h^{\tau}_j = [\overrightarrow{\mathrm{LSTM}}(x^{\tau}_j); \overleftarrow{\mathrm{LSTM}}(x^{\tau}_j)], \; j \in [1, m]$. (2)", "Then, we dynamically associate them with each word $w_i$ in the sentence to tailor-make the target representation $r^{\tau}_i$ at time step $i$: $r^{\tau}_i = \sum_{j=1}^{m} h^{\tau}_j * \mathcal{F}(h^{(l)}_i, h^{\tau}_j)$, (3) where the function $\mathcal{F}$ measures the relatedness between the $j$-th target word representation $h^{\tau}_j$ and the $i$-th word-level representation $h^{(l)}_i$: $\mathcal{F}(h^{(l)}_i, h^{\tau}_j) = \frac{\exp(h^{(l)\top}_i h^{\tau}_j)}{\sum_{k=1}^{m} \exp(h^{(l)\top}_i h^{\tau}_k)}$. (4)", "Finally, the concatenation of $r^{\tau}_i$ and $h^{(l)}_i$ is fed into a fully-connected layer to obtain the $i$-th target-specific word representation $\tilde{h}^{(l)}_i$: $\tilde{h}^{(l)}_i = g(W^{\tau}[h^{(l)}_i : r^{\tau}_i] + b^{\tau})$, (5) where $g(\cdot)$ is a non-linear activation function and \":\" denotes vector concatenation.", "$W^{\tau}$ and $b^{\tau}$ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq. 5), the context information captured in the contextualized representations from the BiLSTM layer will be lost, since the mean and the variance of the features within the feature vector are changed.", "To take advantage of the context information, which has been proved useful in (Lai et al., 2015), we investigate two strategies, Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig. 2.", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input $h^{(l+1)}_i$ of the $(l+1)$-th CPT layer is formulated as: $h^{(l+1)}_i = h^{(l)}_i + \tilde{h}^{(l)}_i, \; i \in [1, n], \; l \in [0, L]$, (6) where $h^{(l)}_i$ is the input of the $l$-th layer and $\tilde{h}^{(l)}_i$ is the output of TST in this layer.", "We unfold the recursive form of Eq. 6 as follows: $h^{(l+1)}_i = h^{(0)}_i + \mathrm{TST}(h^{(0)}_i) + \cdots + \mathrm{TST}(h^{(l)}_i)$. (7)", "Here, we denote $\tilde{h}^{(l)}_i$ as $\mathrm{TST}(h^{(l)}_i)$.", "From Eq. 7, we can see that the output of each layer will contain the contextualized word representations (i.e., $h^{(0)}_i$); thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., $\mathrm{TST}(h^{(l)}_i)$) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015), Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate $t^{(l)}$ is computed as follows: $t^{(l)}_i = \sigma(W^{\mathrm{trans}} h^{(l)}_i + b^{\mathrm{trans}})$, (8) where $t^{(l)}_i$ is the gate for the $i$-th input of the $l$-th CPT layer, and $\sigma$ is the sigmoid activation function.", "Then we perform a convex combination of $h^{(l)}_i$ and $\tilde{h}^{(l)}_i$ based on the gate: $h^{(l+1)}_i = t^{(l)}_i \odot \tilde{h}^{(l)}_i + (1 - t^{(l)}_i) \odot h^{(l)}_i$. (9)", "Here, $\odot$ denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): $h^{(l+1)} = \big[\prod_{k=0}^{l}(1 - t^{(k)})\big] \odot h^{(0)} + \big[t^{(0)} \prod_{k=1}^{l}(1 - t^{(k)})\big] \odot \mathrm{TST}(h^{(0)}) + \cdots + t^{(l-1)}(1 - t^{(l)}) \odot \mathrm{TST}(h^{(l-1)}) + t^{(l)} \odot \mathrm{TST}(h^{(l)})$.", "Thus, the context information is integrated into each upper layer, and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in the different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN from performing well is that a vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve this, we adopt a proximity strategy, which is observed to be effective in (Li and Lam, 2017).", "The idea is that a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance $v_i$ between the $i$-th word and the target (footnote 4): $v_i = \begin{cases} 1 - \frac{k+m-i}{C} & i < k + m \\ 1 - \frac{i-k}{C} & k + m \le i \le n \\ 0 & i > n \end{cases}$ (10) where $k$ is the index of the first target word, $C$ is a pre-specified constant, and $m$ is the length of the target $w^{\tau}$.", "Then, we use $v$ to help the CNN locate the correct opinion w.r.t. the given target: $\hat{h}^{(l)}_i = h^{(l)}_i * v_i, \; i \in [1, n], \; l \in [1, L]$. (11)", "Based on Eq. 10 and Eq. 11, the words close to the target will be highlighted and those far away will be downgraded.", "$v$ is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted $\hat{h}^{(L)}$ to the convolutional layer, i.e., the top-most layer in Fig. 1, to generate the feature map $c \in \mathbb{R}^{n-s+1}$ as follows: $c_i = \mathrm{ReLU}(w_{\mathrm{conv}}^{\top} \hat{h}^{(L)}_{i:i+s-1} + b_{\mathrm{conv}})$, (12) where $\hat{h}^{(L)}_{i:i+s-1} \in \mathbb{R}^{s \cdot \dim_h}$ is the concatenated vector of $\hat{h}^{(L)}_i, \cdots, \hat{h}^{(L)}_{i+s-1}$, and $s$ is the kernel size.", "$w_{\mathrm{conv}} \in \mathbb{R}^{s \cdot \dim_h}$ and $b_{\mathrm{conv}} \in \mathbb{R}$ are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation $z \in \mathbb{R}^{n_k}$ by employing $n_k$ kernels: $z = [\max(c^1), \cdots, \max(c^{n_k})]$. (13)", "Finally, we pass $z$ to a fully connected layer for sentiment prediction: $p(y|w^{\tau}, w) = \mathrm{Softmax}(W_f z + b_f)$, (14) where $W_f$ and $b_f$ are learnable parameters.", "4 As we perform sentence padding, it is possible that the index $i$ is larger than the actual length $n$ of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from the SemEval ABSA challenge (Pontiki et al., 2014), containing user reviews in the laptop domain and the restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al. (2014), containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1, where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct a pairwise t-test on both Accuracy and Macro-Averaged F1 to verify whether the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
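The target-specific transformation described in the paper text above (Eqs. 2-5) is compact enough to sketch in PyTorch. The snippet below is an illustrative re-implementation written only against the formulas quoted in this record, not the authors' released TNet code; the class name, the use of `torch.bmm` for the batched dot products, and the choice of `tanh` for the unspecified activation g(·) are assumptions.

```python
import torch
import torch.nn as nn


class TargetSpecificTransformation(nn.Module):
    """Sketch of TST (Eqs. 2-5): attend over the target words to build a
    per-word target vector r_i, then fuse it with the word feature h_i."""

    def __init__(self, dim_w, dim_h):
        super().__init__()
        # BiLSTM over the target phrase (Eq. 2); dim_h hidden units per direction.
        self.target_lstm = nn.LSTM(dim_w, dim_h, batch_first=True,
                                   bidirectional=True)
        # Fully-connected fusion layer with weights W_tau, b_tau (Eq. 5).
        self.fuse = nn.Linear(4 * dim_h, 2 * dim_h)

    def forward(self, h, x_target):
        # h:        (batch, n, 2*dim_h)  sentence-word features h_i^(l)
        # x_target: (batch, m, dim_w)    embeddings of the target words
        h_tau, _ = self.target_lstm(x_target)                      # Eq. 2
        # Relatedness F(h_i, h_tau_j), normalised over the m target words (Eq. 4).
        scores = torch.softmax(torch.bmm(h, h_tau.transpose(1, 2)), dim=-1)
        r_tau = torch.bmm(scores, h_tau)                           # Eq. 3
        # Concatenate and project; tanh stands in for the unspecified g(.) (Eq. 5).
        return torch.tanh(self.fuse(torch.cat([h, r_tau], dim=-1)))
```

The softmax over the last dimension normalises the relatedness scores across the m target words, which is exactly what makes the target representation r_i different for every sentence word.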
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-10
Impact of CPT layer number
We conduct experiments on the held-out training data of LAPTOP and vary the layer number L from 2 to 10 in increments of 2. Increasing the layer number can improve performance at first, but the results go down when L ≥ 4 due to the limited training data.
We conduct experiments on the held-out training data of LAPTOP and vary the layer number L from 2 to 10 in increments of 2. Increasing the layer number can improve performance at first, but the results go down when L ≥ 4 due to the limited training data.
[]
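The position-relevance weighting and the convolutional classifier head (Eqs. 10-14) can likewise be sketched. The code below assumes 0-based indexing, assumes the CPT output keeps the 2·dim_h width so Eq. 11 can scale it directly, and picks an illustrative kernel count (n_k = 50); only the kernel size s = 3 and the three sentiment classes are taken from the record itself, and the constant C is assumed to be set large enough to keep the weights non-negative.

```python
import torch
import torch.nn as nn


def position_relevance(padded_len, true_len, k, m, C):
    # Eq. 10 (sketch): proximity weight v_i for every padded position i, with k the
    # 0-based index of the first target word, m the target length, true_len the
    # unpadded sentence length, and C a pre-specified constant.
    v = torch.zeros(padded_len)
    for i in range(padded_len):
        if i >= true_len:                   # padding positions: v_i = 0
            v[i] = 0.0
        elif i < k + m:                     # words up to the end of the target span
            v[i] = 1.0 - (k + m - i) / C
        else:                               # words after the target
            v[i] = 1.0 - (i - k) / C
    return v


class ConvClassifier(nn.Module):
    # Sketch of Eqs. 11-14: scale the last CPT output by v, convolve with n_k
    # kernels of width s, max-pool over time, and map to the three classes.
    def __init__(self, dim_h, n_k=50, s=3, n_classes=3):
        super().__init__()
        self.conv = nn.Conv1d(2 * dim_h, n_k, kernel_size=s)   # w_conv, b_conv (Eq. 12)
        self.out = nn.Linear(n_k, n_classes)                   # W_f, b_f (Eq. 14)

    def forward(self, h_last, v):
        # h_last: (batch, n, 2*dim_h) output of the L-th CPT layer; v: (batch, n)
        h_hat = h_last * v.unsqueeze(-1)                       # Eq. 11
        c = torch.relu(self.conv(h_hat.transpose(1, 2)))       # (batch, n_k, n-s+1), Eq. 12
        z = c.max(dim=2).values                                # max pooling, Eq. 13
        return torch.softmax(self.out(z), dim=-1)              # p(y | w_tau, w), Eq. 14
```

Because the convolution uses trigram-sized kernels, the max-pooled features correspond directly to the "most informative trigram" phrases discussed in the case study above.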
GEM-SciDuet-train-35#paper-1049#slide-11
1049
Transformation Networks for Target-Oriented Sentiment Classification *
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of attention mechanism and the obstacles that block CNN to perform well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originated from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
GEM-SciDuet-train-35#paper-1049#slide-11
Case Study
Sentence / BILSTM-ATT-G / RAM / TNet-LF / TNet-AS
3. Sure it s not light and slim but the [features]P make up for it 100% . (N7 / N7 / P / P)
4. Not only did they have amazing , [sandwiches]P are out of this world !
5. [startup times]N are incredibly long : over two minutes (P7 / P7 / N / N)
6. I am pleased with the fast [log on]P speedy [wifi connection]P and the long [battery life]P ( 6 hrs ) . ((P, P, P) / (P, P, P) / (P, P, P) / (P, P, P))
7. The [staff]N should be a bit more friendly . (P / P7 / P7 / P7)
Our TNet can make correct predictions when the opinion is target specific, e.g., long in the 5th and the 6th example. TNet can capture the salient features for target sentiment prediction accurately.
Sentence / BILSTM-ATT-G / RAM / TNet-LF / TNet-AS
3. Sure it s not light and slim but the [features]P make up for it 100% . (N7 / N7 / P / P)
4. Not only did they have amazing , [sandwiches]P are out of this world !
5. [startup times]N are incredibly long : over two minutes (P7 / P7 / N / N)
6. I am pleased with the fast [log on]P speedy [wifi connection]P and the long [battery life]P ( 6 hrs ) . ((P, P, P) / (P, P, P) / (P, P, P) / (P, P, P))
7. The [staff]N should be a bit more friendly . (P / P7 / P7 / P7)
Our TNet can make correct predictions when the opinion is target specific, e.g., long in the 5th and the 6th example. TNet can capture the salient features for target sentiment prediction accurately.
[]
GEM-SciDuet-train-35#paper-1049#slide-12
1049
Transformation Networks for Target-Oriented Sentiment Classification *
Target-oriented sentiment classification aims at classifying sentiment polarities over individual opinion targets in a sentence. RNN with attention seems a good fit for the characteristics of this task, and indeed it achieves the state-of-the-art performance. After re-examining the drawbacks of attention mechanism and the obstacles that block CNN to perform well in this classification task, we propose a new model to overcome these issues. Instead of attention, our model employs a CNN layer to extract salient features from the transformed word representations originated from a bi-directional RNN layer. Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer. Experiments show that our model achieves a new state-of-the-art performance on a few benchmarks. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction Target-oriented (also mentioned as \"target-level\" or \"aspect-level\" in some works) sentiment classification aims to determine sentiment polarities over \"opinion targets\" that explicitly appear in the sentences (Liu, 2012) .", "For example, in the sentence \"I am pleased with the fast log on, and the long battery life\", the user mentions two targets * The work was done when Xin Li was an intern at Tencent AI Lab.", "This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).", "1 Our code is open-source and available at https:// github.com/lixin4ever/TNet \"log on\" and \"better life\", and expresses positive sentiments over them.", "The task is usually formulated as predicting a sentiment category for a (target, sentence) pair.", "Recurrent Neural Networks (RNNs) with attention mechanism, firstly proposed in machine translation (Bahdanau et al., 2014) , is the most commonly-used technique for this task.", "For example, Wang et al.", "(2016) ; Tang et al.", "(2016b) ; ; Liu and Zhang (2017) ; Ma et al.", "(2017) and employ attention to measure the semantic relatedness between each context word and the target, and then use the induced attention scores to aggregate contextual features for prediction.", "In these works, the attention weight based combination of word-level features for classification may introduce noise and downgrade the prediction accuracy.", "For example, in \"This dish is my favorite and I always get it and never get tired of it.", "\", these approaches tend to involve irrelevant words such as \"never\" and \"tired\" when they highlight the opinion modifier \"favorite\".", "To some extent, this drawback is rooted in the attention mechanism, as also observed in machine translation (Luong et al., 2015) and image captioning .", "Another observation is that the sentiment of a target is usually determined by key phrases such as \"is my favorite\".", "By this token, Convolutional Neural Networks (CNNs)-whose capability for extracting the informative n-gram features (also called \"active local features\") as sentence representations has been verified in (Kim, 2014; Johnson and Zhang, 2015) -should be a suitable model for this classification problem.", "However, CNN likely fails in cases where a sentence expresses different sentiments over multiple targets, such as \"great food but the service was dreadful!\".", "One reason is that CNN cannot fully explore the target information as done by RNN-based meth-ods (Tang 
et al., 2016a) .", "2 Moreover, it is hard for vanilla CNN to differentiate opinion words of multiple targets.", "Precisely, multiple active local features holding different sentiments (e.g., \"great food\" and \"service was dreadful\") may be captured for a single target, thus it will hinder the prediction.", "We propose a new architecture, named Target-Specific Transformation Networks (TNet), to solve the above issues in the task of target sentiment classification.", "TNet firstly encodes the context information into word embeddings and generates the contextualized word representations with LSTMs.", "To integrate the target information into the word representations, TNet introduces a novel Target-Specific Transformation (TST) component for generating the target-specific word representations.", "Contrary to the previous attention-based approaches which apply the same target representation to determine the attention scores of individual context words, TST firstly generates different representations of the target conditioned on individual context words, then it consolidates each context word with its tailor-made target representation to obtain the transformed word representation.", "Considering the context word \"long\" and the target \"battery life\" in the above example, TST firstly measures the associations between \"long\" and individual target words.", "Then it uses the association scores to generate the target representation conditioned on \"long\".", "After that, TST transforms the representation of \"long\" into its target-specific version with the new target representation.", "Note that \"long\" could also indicate a negative sentiment (say for \"startup time\"), and the above TST is able to differentiate them.", "As the context information carried by the representations from the LSTM layer will be lost after the non-linear TST, we design a contextpreserving mechanism to contextualize the generated target-specific word representations.", "Such mechanism also allows deep transformation structure to learn abstract features 3 .", "To help the CNN feature extractor locate sentiment indicators more accurately, we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target.", "2 One method could be concatenating the target representation with each word representation, but the effect as shown in (Wang et al., 2016) is limited.", "3 Abstract features usually refer to the features ultimately useful for the task (Bengio et al., 2013; LeCun et al., 2015) .", "In summary, our contributions are as follows: • TNet adapts CNN to handle target-level sentiment classification, and its performance dominates the state-of-the-art models on benchmark datasets.", "• A novel Target-Specific Transformation component is proposed to better integrate target information into the word representations.", "• A context-preserving mechanism is designed to forward the context information into a deep transformation architecture, thus, the model can learn more abstract contextualized word features from deeper networks.", "Model Description Given a target-sentence pair (w τ , w), where w τ = {w τ 1 , w τ 2 , ..., w τ m } is a sub-sequence of w = {w 1 , w 2 , ..., w n }, and the corresponding word embeddings x τ = {x τ 1 , x τ 2 , ..., x τ m } and x = {x 1 , x 2 , ..., x n }, the aim of target sentiment classification is to predict the sentiment polarity y ∈ {P, N, O} of the sentence w over the target w τ , where P , N and O denote \"positive\", 
\"negative\" and \"neutral\" sentiments respectively.", "The architecture of the proposed Target-Specific Transformation Networks (TNet) is shown in Fig.", "1 .", "The bottom layer is a BiLSTM which transforms the input x = {x 1 , x 2 , ..., x n } ∈ R n×dimw into the contextualized word representations h (0) = {h (0) 1 , h (0) 2 , ..., h (0) n } ∈ R n×2dim h (i.e.", "hidden states of BiLSTM), where dim w and dim h denote the dimensions of the word embeddings and the hidden representations respectively.", "The middle part, the core part of our TNet, consists of L Context-Preserving Transformation (CPT) layers.", "The CPT layer incorporates the target information into the word representations via a novel Target-Specific Transformation (TST) component.", "CPT also contains a contextpreserving mechanism, resembling identity mapping (He et al., 2016a,b) and highway connection (Srivastava et al., 2015a,b) , allows preserving the context information and learning more abstract word-level features using a deep network.", "The top most part is a position-aware convolutional layer which first encodes positional relevance between a word and a target, and then extracts informative features for classification.", "Bi-directional LSTM Layer As observed in Lai et al.", "(2015) , combining contextual information with word embeddings is an effective way to represent a word in convolutionbased architectures.", "TNet also employs a BiL-STM to accumulate the context information for each word of the input sentence, i.e., the bottom part in Fig.", "1 .", "For simplicity and space issue, we denote the operation of an LSTM unit on x i as LSTM(x i ).", "Thus, the contextualized word representation h (0) i ∈ R 2dim h is obtained as follows: h (0) i = [ − −−− → LSTM(x i ); ← −−− − LSTM(x i )], i ∈ [1, n].", "(1) Context-Preserving Transformation The above word-level representation has not considered the target information yet.", "Traditional attention-based approaches keep the word-level features static and aggregate them with weights as the final sentence representation.", "In contrast, as shown in the middle part in Fig.", "1 , we introduce multiple CPT layers and the detail of a single CPT is shown in Fig.", "2 .", "In each CPT layer, a tailor-made TST component that aims at better consolidating word representation and target representation is proposed.", "Moreover, we design a context-preserving mechanism enabling the learning of target-specific word representations in a deep neural architecture.", "Target-Specific Transformation TST component is depicted with the TST block in Liu and Zhang, 2017) average the embeddings of the target words as the target representation.", "This strategy may be inappropriate in some cases because different target words usually do not contribute equally.", "For example, in the target \"amd turin processor\", the word \"processor\" is more important than \"amd\" and \"turin\", because the sentiment is usually conveyed over the phrase head, i.e.,\"processor\", but seldom over modifiers (such as brand name \"amd\").", "Ma et al.", "(2017) attempted to overcome this issue by measuring the importance score between each target word representation and the averaged sentence vector.", "However, it may be ineffective for sentences expressing multiple sentiments (e.g., \"Air has higher resolution but the fonts are small.", "\"), because taking the average tends to neutralize different sentiments.", "We propose to dynamically compute the importance of target words based on each sentence word 
rather than the whole sentence.", "We first employ another BiLSTM to obtain the target word representations h τ ∈ R m×2dim h : h τ j = [ − −−− → LSTM(x τ j ); ← −−− − LSTM(x τ j )], j ∈ [1, m].", "(2) Then, we dynamically associate them with each word w i in the sentence to tailor-make target representation r τ i at the time step i: r τ i = m j=1 h τ j * F(h (l) i , h τ j ) , (3) where the function F measures the relatedness between the j-th target word representation h τ j and the i-th word-level representation h (l) i : F(h (l) i , h τ j ) = exp (h (l) i h τ j ) m k=1 exp (h (l) i h τ k ) .", "(4) Finally, the concatenation of r τ i and h (l) i is fed into a fully-connected layer to obtain the i-th targetspecific word representationh i (l) : h (l) i = g(W τ [h (l) i : r τ i ] + b τ ), (5) where g( * ) is a non-linear activation function and \":\" denotes vector concatenation.", "W τ and b τ are the weights of the layer.", "Context-Preserving Mechanism After the non-linear TST (see Eq.", "5), the context information captured with contextualized representations from the BiLSTM layer will be lost since the mean and the variance of the features within the feature vector will be changed.", "To take advantage of the context information, which has been proved to be useful in (Lai et al., 2015) , we investigate two strategies: Lossless Forwarding (LF) and Adaptive Scaling (AS), to pass the context information to each following layer, as depicted by the block \"LF/AS\" in Fig.", "2 .", "Accordingly, the model variants are named TNet-LF and TNet-AS.", "Lossless Forwarding.", "This strategy preserves context information by directly feeding the features before the transformation to the next layer.", "Specifically, the input h (l+1) i of the (l + 1)-th CPT layer is formulated as: h (l+1) i = h (l) i +h (l) i , i ∈ [1, n], l ∈ [0, L], (6) where h (l) i is the input of the l-th layer andh (l) i is the output of TST in this layer.", "We unfold the recursive form of Eq.", "6 as follows: h (l+1) i = h (0) i +TST(h (0) i )+· · ·+TST(h (l) i ).", "(7) Here, we denoteh (l) i as TST(h (l) i ).", "From Eq.", "7, we can see that the output of each layer will contain the contextualized word representations (i.e., h (0) i ), thus, the context information is encoded into the transformed features.", "We call this strategy \"Lossless Forwarding\" because the contextualized representations and the transformed representations (i.e., TST(h (l) i )) are kept unchanged during the feature combination.", "Adaptive Scaling.", "Lossless Forwarding introduces the context information by directly adding back the contextualized features to the transformed features, which raises a question: Can the weights of the input and the transformed features be adjusted dynamically?", "With this motivation, we propose another strategy, named \"Adaptive Scaling\".", "Similar to the gate mechanism in RNN variants (Jozefowicz et al., 2015) , Adaptive Scaling introduces a gating function to control the passed proportions of the transformed features and the input features.", "The gate t (l) as follows: t (l) i = σ(W trans h (l) i + b trans ), (8) where t (l) i is the gate for the i-th input of the l-th CPT layer, and σ is the sigmoid activation function.", "Then we perform convex combination of h (l) i andh (l) i based on the gate: h (l+1) i = t (l) i h (l) i + (1 − t (l) i ) h (l) i .", "(9) Here, denotes element-wise multiplication.", "The non-recursive form of this equation is as follows (for clarity, we ignore the subscripts): h (l+1) = [ l 
k=0 (1 − t (k) )] h (0) +[t (0) l k=1 (1 − t (k) )] TST(h (0) ) + · · · +t (l−1) (1 − t (l) ) TST(h (l−1) ) + t (l) TST(h (l) ).", "Thus, the context information is integrated in each upper layer and the proportions of the contextualized representations and the transformed representations are controlled by the computed gates in different transformation layers.", "Convolutional Feature Extractor Recall that the second issue that blocks CNN to perform well is that vanilla CNN may associate a target with unrelated general opinion words which are frequently used as modifiers for different targets across domains.", "For example, \"service\" in \"Great food but the service is dreadful\" may be associated with both \"great\" and \"dreadful\".", "To solve it, we adopt a proximity strategy, which is observed effective in Li and Lam, 2017) .", "The idea is a closer opinion word is more likely to be the actual modifier of the target.", "Specifically, we first calculate the position relevance v i between the i-th word and the target 4 : v i =      1 − (k+m−i) C i < k + m 1 − i−k C k + m ≤ i ≤ n 0 i > n (10) where k is the index of the first target word, C is a pre-specified constant, and m is the length of the target w τ .", "Then, we use v to help CNN locate the correct opinion w.r.t.", "the given target: h (l) i = h (l) i * v i , i ∈ [1, n], l ∈ [1, L].", "(11) Based on Eq.", "10 and Eq.", "11, the words close to the target will be highlighted and those far away will be downgraded.", "v is also applied on the intermediate output to introduce the position information into each CPT layer.", "Then we feed the weighted h (L) to the convolutional layer, i.e., the top-most layer in Fig.", "1 , to generate the feature map c ∈ R n−s+1 as follows: c i = ReLU(w conv h (L) i:i+s−1 + b conv ), (12) where h (L) i:i+s−1 ∈ R s·dim h is the concatenated vec- tor ofĥ (L) i , · · · ,ĥ (L) i+s−1 , and s is the kernel size.", "w conv ∈ R s·dim h and b conv ∈ R are learnable weights of the convolutional kernel.", "To capture the most informative features, we apply max pooling (Kim, 2014) and obtain the sentence representation z ∈ R n k by employing n k kernels: z = [max(c 1 ), · · · , max(c n k )] .", "(13) Finally, we pass z to a fully connected layer for sentiment prediction: p(y|w τ , w) = Softmax(W f z + b f ).", "(14) where W f and b f are learnable parameters.", "4 As we perform sentence padding, it is possible that the index i is larger than the actual length n of the sentence.", "Experiments Experimental Setup As shown in Table 1 , we evaluate the proposed TNet on three benchmark datasets: LAPTOP and REST are from SemEval ABSA challenge (Pontiki et al., 2014) , containing user reviews in laptop domain and restaurant domain respectively.", "We also remove a few examples having the \"conflict label\" as done in ; TWITTER is built by Dong et al.", "(2014) , containing twitter posts.", "All tokens are lowercased without removal of stop words, symbols or digits, and sentences are zero-padded to the length of the longest sentence in the dataset.", "Evaluation metrics are Accuracy and Macro-Averaged F1 where the latter is more appropriate for datasets with unbalanced classes.", "We also conduct pairwise t-test on both Accuracy and Macro-Averaged F1 to verify if the improvements over the compared models are reliable.", "TNet is compared with the following methods.", "• SVM (Kiritchenko et al., 2014) : It is a traditional support vector machine based model with extensive feature engineering; • AdaRNN (Dong et al., 
2014) : It learns the sentence representation toward target for sentiment prediction via semantic composition over dependency tree; • AE-LSTM, and ATAE-LSTM (Wang et al., 2016) : AE-LSTM is a simple LSTM model incorporating the target embedding as input, while ATAE-LSTM extends AE-LSTM with attention; • IAN (Ma et al., 2017) : IAN employs two LSTMs to learn the representations of the context and the target phrase interactively; • CNN-ASP: It is a CNN-based model implemented by us which directly concatenates target representation to each word embedding; • TD-LSTM (Tang et al., 2016a) : It employs two LSTMs to model the left and right contexts of the target separately, then performs predictions based on concatenated context representations; • MemNet (Tang et al., 2016b) : It applies attention mechanism over the word embeddings multiple times and predicts sentiments based on the top-most sentence representations; • BILSTM-ATT-G (Liu and Zhang, 2017): It models left and right contexts using two attention-based LSTMs and introduces gates to measure the importance of left context, right context, and the entire sentence for the prediction; • RAM : RAM is a multilayer architecture where each layer consists of attention-based aggregation of word features and a GRU cell to learn the sentence representation.", "We run the released codes of TD-LSTM and BILSTM-ATT-G to generate results, since their papers only reported results on TWITTER.", "We also rerun MemNet on our datasets and evaluate it with both accuracy and Macro-Averaged F1.", "5 We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings and the dimension is 300 (i.e., dim w = 300).", "For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution U(−0.25, 0.25), as done in (Kim, 2014) .", "We only use one convolutional kernel size because it was observed that CNN with single optimal kernel size is comparable with CNN having multiple kernel sizes on small datasets (Zhang and Wallace, 2017) .", "To alleviate overfitting, we apply dropout on the input word embeddings of the LSTM and the ultimate sentence representation z.", "All weight matrices are initialized with the uniform distribution U(−0.01, 0.01) and the biases are initialized 5 The codes of TD-LSTM/MemNet and BILSTM-ATT-G are available at: http://ir.hit.edu.cn/˜dytang and http://leoncrashcode.github.io.", "Note that MemNet was only evaluated with accuracy.", "as zeros.", "The training objective is cross-entropy, and Adam (Kingma and Ba, 2015) is adopted as the optimizer by following the learning rate and the decay rates in the original paper.", "The hyper-parameters of TNet-LF and TNet-AS are listed in Table 2 .", "Specifically, all hyperparameters are tuned on 20% randomly held-out training data and the hyper-parameter collection producing the highest accuracy score is used for testing.", "Our model has comparable number of parameters compared to traditional LSTM-based models as we reuse parameters in the transformation layers and BiLSTM.", "6 Table 3 , both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.", "Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tweets with more ungrammatical sentences in TWITTER.", "The reason is the CNN-based feature extractor arms TNet with more power to extract accurate features from ungrammatical 
sentences.", "Indeed, we can also observe that another CNN-based baseline, i.e., CNN-ASP implemented by us, also obtains good results on TWITTER.", "Main Results As shown in On the other hand, the performance of those comparison methods is mostly unstable.", "For the tweet in TWITTER, the competitive BILSTM-ATT-G and RAM cannot perform as effective as they do for the reviews in LAPTOP and REST, due to the fact that they are heavily rooted in LSTMs and the ungrammatical sentences hinder their ca- Table 3 : Experimental results (%).", "The results with symbol\" \" are retrieved from the original papers, and those starred ( * ) one are from Dong et al.", "(2014) .", "The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM.", "pability in capturing the context features.", "Another difficulty caused by the ungrammatical sentences is that the dependency parsing might be errorprone, which will affect those methods such as AdaRNN using dependency information.", "From the above observations and analysis, some takeaway message for the task of target sentiment classification could be: • LSTM-based models relying on sequential information can perform well for formal sentences by capturing more useful context features; • For ungrammatical text, CNN-based models may have some advantages because CNN aims to extract the most informative n-gram features and is thus less sensitive to informal texts without strong sequential patterns.", "Performance of Ablated TNet To investigate the impact of each component such as deep transformation, context-preserving mechanism, and positional relevance, we perform comparison between the full TNet models and its ablations (the third group in Table 3 ).", "After removing the deep transformation (i.e., the techniques introduced in Section 2.2), both TNet-LF and TNet-AS are reduced to TNet w/o transformation (where position relevance is kept), and their results in both accuracy and F1 measure are incomparable with those of TNet.", "It shows that the integration of target information into the word-level representations is crucial for good performance.", "Comparing the results of TNet and TNet w/o context (where TST and position relevance are kept), we observe that the performance of TNet w/o context drops significantly on LAPTOP and REST 7 , while on TWITTER, TNet w/o context performs very competitive (p-values with TNet-LF and TNet-AS are 0.066 and 0.053 respectively for Accuracy).", "Again, we could attribute this phenomenon to the ungrammatical user generated content of twitter, because the contextpreserving component becomes less important for such data.", "TNet w/o context performs consistently better than TNet w/o transformation, which verifies the efficacy of the target specific transformation (TST), before applying context-preserving.", "As for the position information, we conduct statistical t-test between TNet-LF/AS and TNet-LF/AS w/o position together with performance comparison.", "All of the produced p-values are less than 0.05, suggesting that the improvements brought in by position information are significant.", "CPT versus Alternatives The next interesting question is what if we replace the transformation module (i.e., the CPT layers in Fig.1) of TNet with other commonly-used components?", "We investigate two alternatives: attention mechanism and fully-connected (FC) layer, resulting in three pipelines as shown in the second group of Table 3 (position relevance is kept for 
them).", "LSTM-ATT-CNN applies attention as the alternative 8 , and it does not need the contextpreserving mechanism.", "It performs unexceptionally worse than the TNet variants.", "We are surprised that LSTM-ATT-CNN is even worse than TNet w/o transformation (a pipeline simply removing the transformation module) on TWITTER.", "More concretely, applying attention results in negative effect on TWITTER, which is consistent with the observation that all those attention-based state-of-the-art methods (i.e., TD-LSTM, Mem-Net, BILSTM-ATT-G, and RAM) cannot perform well on TWITTER.", "LSTM-FC-CNN-LF and LSTM-FC-CNN-AS are built by applying FC layer to replace TST and keeping the context-preserving mechanism (i.e., LF and AS).", "Specifically, the concatenation of word representation and the averaged target vector is fed to the FC layer to obtain targetspecific features.", "Note that LSTM-FC-CNN-LF/AS are equivalent to TNet-LF/AS when processing single-word targets (see Eq.", "3).", "They obtain competitive results on all datasets: comparable with or better than the state-of-the-art methods.", "The TNet variants can still outperform LSTM-FC-CNN-LF/AS with significant gaps, e.g., on LAPTOP and REST, the accuracy gaps between TNet-LF and LSTM-FC-CNN-LF are 0.42% (p < 0.03) and 0.38% (p < 0.04) respectively.", "Impact of CPT Layer Number As our TNet involves multiple CPT layers, we investigate the effect of the layer number L. Specifically, we conduct experiments on the held-out training data of LAPTOP and vary L from 2 to 10, increased by 2.", "The cases L=1 and L=15 are also included.", "The results are illustrated in Figure 3 .", "We can see that both TNet-LF and TNet-AS achieve the best results when L=2.", "While increasing L, the performance is basically becoming worse.", "For large L, the performance of TNet-AS 8 We tried different attention mechanisms and report the best one here, namely, dot attention (Luong et al., 2015) .", "generally becomes more sensitive, it is probably because AS involves extra parameters (see Eq 9) that increase the training difficulty.", "Table 4 shows some sample cases.", "The input targets are wrapped in the brackets with true labels given as subscripts.", "The notations P, N and O in the table represent positive, negative and neutral respectively.", "For each sentence, we underline the target with a particular color, and the text of its corresponding most informative n-gram feature 9 captured by TNet-AS (TNet-LF captures very similar features) is in the same color (so color printing is preferred).", "For example, for the target \"resolution\" in the first sentence, the captured feature is \"Air has higher\".", "Note that as discussed above, the CNN layer of TNet captures such features with the size-three kernels, so that the features are trigrams.", "Each of the last features of the second and seventh sentences contains a padding token, which is not shown.", "Case Study Our TNet variants can predict target sentiment more accurately than RAM and BILSTM-ATT-G in the transitional sentences such as the first sentence by capturing correct trigram features.", "For the third sentence, its second and third most informative trigrams are \"100% .", "PAD\" and \"' s not\", being used together with \"features make up\", our models can make correct predictions.", "Moreover, TNet can still make correct prediction when the explicit opinion is target-specific.", "For example, (P, P, P) (P, P, P) (P, P, P) (P, P, P) 7.", "The [staff] N should be a bit more friendly .", "P P P P 
Table 4 : Example predictions, color printing is preferred.", "The input targets are wrapped in brackets with the true labels given as subscripts.", "indicates incorrect prediction.", "\"long\" in the fifth sentence is negative for \"startup time\", while it could be positive for other targets such as \"battery life\" in the sixth sentence.", "The sentiment of target-specific opinion word is conditioned on the given target.", "Our TNet variants, armed with the word-level feature transformation w.r.t.", "the target, is capable of handling such case.", "We also find that all these models cannot give correct prediction for the last sentence, a commonly used subjunctive style.", "In this case, the difficulty of prediction does not come from the detection of explicit opinion words but the inference based on implicit semantics, which is still quite challenging for neural network models.", "Related Work Apart from sentence level sentiment classification (Kim, 2014; Shi et al., 2018) , aspect/target level sentiment classification is also an important research topic in the field of sentiment analysis.", "The early methods mostly adopted supervised learning approach with extensive hand-coded features (Blair-Goldensohn et al., 2008; Titov and McDonald, 2008; Jiang et al., 2011; Kiritchenko et al., 2014; Wagner et al., 2014; Vo and Zhang, 2015) , and they fail to model the semantic relatedness between a target and its context which is critical for target sentiment analysis.", "Dong et al.", "(2014) incorporate the target information into the feature learning using dependency trees.", "As observed in previous works, the performance heavily relies on the quality of dependency parsing.", "Tang et al.", "(2016a) propose to split the context into two parts and associate target with contextual features separately.", "Similar to (Tang et al., 2016a) , Zhang et al.", "(2016) develop a three-way gated neural network to model the in-teraction between the target and its surrounding contexts.", "Despite the advantages of jointly modeling target and context, they are not capable of capturing long-range information when some critical context information is far from the target.", "To overcome this limitation, researchers bring in the attention mechanism to model target-context association (Tang et al., 2016a,b; Wang et al., 2016; Liu and Zhang, 2017; Ma et al., 2017; Tay et al., 2017) .", "Compared with these methods, our TNet avoids using attention for feature extraction so as to alleviate the attended noise." ] }
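To make the Target-Specific Transformation described in the record above (Eqs. 2-5) more concrete, the following is a minimal NumPy sketch of its core computation. It uses toy dimensions and random vectors in place of the BiLSTM outputs, and it assumes tanh for the unspecified activation g(.); it is an illustration of the mechanism, not the authors' implementation (their released code is at the GitHub link given in the paper text).

import numpy as np

rng = np.random.default_rng(0)
n, m, d = 7, 2, 8                  # sentence length, target length, hidden size (toy values)
H = rng.normal(size=(n, d))        # stand-in for h^(l)_i (contextualized word representations)
H_tau = rng.normal(size=(m, d))    # stand-in for h^tau_j (target word representations, Eq. 2)
W_tau = 0.1 * rng.normal(size=(d, 2 * d))
b_tau = np.zeros(d)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

H_tst = np.zeros_like(H)
for i in range(n):
    scores = softmax(H_tau @ H[i])              # Eq. 4: relatedness F(h_i, h^tau_j) via dot product
    r_tau_i = scores @ H_tau                    # Eq. 3: tailor-made target vector for word i
    fused = np.concatenate([H[i], r_tau_i])     # [h_i : r^tau_i]
    H_tst[i] = np.tanh(W_tau @ fused + b_tau)   # Eq. 5, with g(.) assumed to be tanh

print(H_tst.shape)   # (7, 8): one target-specific representation per sentence word

Because the attention scores are recomputed for every context word, each word receives its own weighting of the target words, which is the property the paper contrasts with averaging the target embeddings.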
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.3", "3.1", "3.3", "3.4", "3.5", "3.6", "4" ], "paper_header_content": [ "Introduction", "Model Description", "Bi-directional LSTM Layer", "Context-Preserving Transformation", "Target-Specific Transformation", "Context-Preserving Mechanism", "Convolutional Feature Extractor", "Experimental Setup", "Performance of Ablated TNet", "CPT versus Alternatives", "Impact of CPT Layer Number", "Case Study", "Related Work" ] }
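The two context-preserving strategies (Lossless Forwarding, Eq. 6, and Adaptive Scaling, Eqs. 8-9) reduce to a residual connection and a learned gate, respectively. The sketch below illustrates both on a single toy vector; the tst() placeholder and random gate weights are assumptions made only to keep the example self-contained, and the gate assignment follows the non-recursive expansion quoted above (transformed features weighted by t, input features by 1-t).

import numpy as np

rng = np.random.default_rng(1)
d = 8
h = rng.normal(size=d)                      # h^(l)_i: input of the l-th CPT layer
W_trans = 0.1 * rng.normal(size=(d, d))
b_trans = np.zeros(d)

def tst(x):
    # Placeholder for the Target-Specific Transformation TST(h^(l)_i)
    return np.tanh(x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Lossless Forwarding (Eq. 6): add the untouched layer input back to the TST output
h_next_lf = h + tst(h)

# Adaptive Scaling (Eqs. 8-9): a gate balances transformed features against the input
t = sigmoid(W_trans @ h + b_trans)          # Eq. 8
h_next_as = t * tst(h) + (1.0 - t) * h      # Eq. 9

print(h_next_lf.shape, h_next_as.shape)     # both (8,)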
GEM-SciDuet-train-35#paper-1049#slide-12
Summary
Our TNet employs a CNN as the feature extractor to detect salient features, avoiding the introduction of noise. Armed with target-specific word representations and proximity information, the TNet variants can predict the sentiment w.r.t. the target more accurately.
Our TNet employs a CNN as the feature extractor to detect salient features, avoiding the introduction of noise. Armed with target-specific word representations and proximity information, the TNet variants can predict the sentiment w.r.t. the target more accurately.
[]
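The proximity information mentioned in the summary above corresponds to the position-relevance weights of Eqs. 10-11 in the paper text. The short illustrative sketch below (with an arbitrary constant C and a toy sentence) shows how words near the target keep weights close to 1 while distant words are scaled down before the convolutional layer; the specific numbers are not taken from the paper.

import numpy as np

n, k, m, C = 10, 4, 2, 40.0   # sentence length, target start index, target length, constant C (toy values)

def position_relevance(i):
    # Eq. 10, with 1-based word indices
    if i < k + m:
        return 1.0 - (k + m - i) / C
    if i <= n:
        return 1.0 - (i - k) / C
    return 0.0                 # padding positions beyond the actual sentence

v = np.array([position_relevance(i) for i in range(1, n + 1)])
H = np.ones((n, 8))            # stand-in for h^(L)_i from the last CPT layer
H_weighted = H * v[:, None]    # Eq. 11: scale each word representation by v_i
print(np.round(v, 3))          # weights peak around positions k..k+m-1 and decay with distance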
GEM-SciDuet-train-36#paper-1050#slide-0
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that extracting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the fewest unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the largest number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing fewer than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on the DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain content similar to the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry semantic content similar to the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ, which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases, the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 

answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QASumm+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite having a large answer space.", "Based on this analysis, we suggest that future work consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or fewer.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness at generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs better, and combining chunks with LSTM representations yields the highest scores.", "Human Evaluation The usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e., ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm, we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or object of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document) against our supervised method (section 3.2), abstractive summaries generated by See et al. (2017), and the human abstracts in full.", "Additionally, we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion, the data was analyzed manually to check accuracy (since turkers entered each answer as free text) and to remove any meaningless data points.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 

performance times.", "However, we observe a large margin in QA accuracy for our full system compared to the abstractive and our supervised approaches.", "Although participants rated the informativeness of the summaries as comparable, our systems yielded higher performance.", "This strongly indicates that having a system which makes use of document comprehension has a tangible effect when applied to a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
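For readers who want to see how the training signal described above fits together, here is a minimal sketch of the composite reward combining question-answering competency, adequacy, fluency, and length, assuming the β = 2α constraint mentioned in the experimental settings. The helper functions, weights, and toy values below are illustrative stand-ins, not the authors' released implementation.

```python
# Hedged sketch of the composite reward R(y) = R_c + gamma*R_a + alpha*R_f + beta*R_l,
# with beta = 2*alpha as in the paper's setup; all inputs here are toy values.
import math

def qa_competency(answer_logprobs):
    # Average log-likelihood of answering the K questions from the sampled summary.
    return sum(answer_logprobs) / len(answer_logprobs)

def adequacy(summary_tokens, reference_tokens):
    # Unigram overlap with the reference, normalized by reference length.
    overlap = len(set(summary_tokens) & set(reference_tokens))
    return overlap / max(len(reference_tokens), 1)

def fluency(labels):
    # Penalize 0/1 switches so selected words form consecutive runs.
    return -sum(abs(labels[t] - labels[t - 1]) for t in range(1, len(labels)))

def length_penalty(labels, delta=0.15):
    # Keep the ratio of selected words close to the threshold delta.
    return -abs(sum(labels) / len(labels) - delta)

def reward(labels, summary_tokens, reference_tokens, answer_logprobs,
           gamma=0.5, alpha=0.1):
    beta = 2 * alpha  # beta = 2*alpha constraint from the experimental settings
    return (qa_competency(answer_logprobs)
            + gamma * adequacy(summary_tokens, reference_tokens)
            + alpha * fluency(labels)
            + beta * length_penalty(labels))

# Toy usage with made-up values.
labels = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
summary = ["boykin", "videotaped", "victim"]
reference = ["daniel", "boykin", "videotaped", "his", "co-worker"]
print(reward(labels, summary, reference, answer_logprobs=[math.log(0.4), math.log(0.7)]))
```

In practice the QA term would come from the answer likelihoods produced by the question-answering component, and γ and α would be chosen by grid search on the dev set, as described above.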
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-0
Summary Usefulness
(CNN) It looks like the Republicans in Congress have failed again. House Republicans defeated a plan pushed by Senate Majority Leader Mitch McConnell to fund the Department of Homeland Security, money that congressional Republicans have been holding hostage in their effort to overturn President Obama's executive order on immigration. McConnell proposed that there would be a separate vote on the immigration issue. When Speaker John Boehner proposed an even narrower compromise, funding the Department for only three more weeks, his caucus said no. The final bill provides funding for one more week, at which point Congress needs to take up the issue again. [] GOP hogs the spotlight with funding deadlines like the battle over money for the Department of Homeland Security. He says the continual crises deprive _______ of the chance to move his agenda forward even slightly. Kristjan Arumae and Fei Liu Guiding Extractive Summarization with Question-Answering Rewards - NAACL 2019
(CNN) It looks like the Republicans in Congress have failed again. House Republicans defeated a plan pushed by Senate Majority Leader Mitch McConnell to fund the Department of Homeland Security, money that congressional Republicans have been holding hostage in their effort to overturn President Obama's executive order on immigration. McConnell proposed that there would be a separate vote on the immigration issue. When Speaker John Boehner proposed an even narrower compromise, funding the Department for only three more weeks, his caucus said no. The final bill provides funding for one more week, at which point Congress needs to take up the issue again. [] GOP hogs the spotlight with funding deadlines like the battle over money for the Department of Homeland Security. He says the continual crises deprive _______ of the chance to move his agenda forward even slightly. Kristjan Arumae and Fei Liu Guiding Extractive Summarization with Question-Answering Rewards - NAACL 2019
[]
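The slide above shows a fill-in-the-blank question of the kind used to reward the summarizer. As noted in the results section, a question is generated by replacing an answer token in an abstract sentence with a blank symbol. The sketch below illustrates that construction with answer tokens supplied by hand; the paper itself derives candidates from dependency roots, subjects/objects, and named entities, and filters them by frequency, so this is only an illustrative approximation.

```python
# Hedged sketch: build Cloze-style QA pairs from an abstract sentence by replacing
# a chosen answer token with a blank symbol. Answer tokens are supplied directly
# here rather than extracted with a parser or NER tagger.
BLANK = "_______"  # illustrative blank symbol

def make_qa_pairs(abstract_sentence, answer_tokens):
    pairs = []
    words = abstract_sentence.split()
    for answer in answer_tokens:
        if answer in words:
            question = " ".join(BLANK if w == answer else w for w in words)
            pairs.append((question, answer))
    return pairs

sentence = "Authorities say they found 90 videos and 1,500 photos of the victim"
for question, answer in make_qa_pairs(sentence, ["Authorities", "victim"]):
    print(question, "->", answer)
```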
GEM-SciDuet-train-36#paper-1050#slide-1
Extractive Summarization
Our system seeks to identify salient and consecutive sequences of words from the source document to assist users in comprehending lengthy documents. We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates. We investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units. To accomplish this we utilize a reinforcement learning framework to explore the space of possible extractive summaries to answer important questions.
Our system seeks to identify salient and consecutive sequences of words from the source document to assist users in comprehending lengthy documents. We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates. We investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units. To accomplish this we utilize a reinforcement learning framework to explore the space of possible extractive summaries to answer important questions.
[]
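The slide above frames training as reinforcement learning over the space of possible extractive summaries. The sketch below is a minimal REINFORCE-style loop for a binary keep/skip extraction policy: sample labels, score them with a reward, and push up the log-probability of high-reward samples. The tiny linear scorer, the stub reward, and the random unit vectors are assumptions for illustration; they stand in for the paper's LSTM-based extractor and full reward function.

```python
# Minimal REINFORCE-style sketch for a binary extraction policy (keep/skip per unit).
import torch
import torch.nn as nn

class TinyExtractor(nn.Module):
    def __init__(self, dim=8):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)  # stands in for the LSTM/MLP extractor

    def forward(self, unit_vectors):
        return torch.sigmoid(self.scorer(unit_vectors)).squeeze(-1)  # p(y_t = 1)

def toy_reward(labels):
    # Stub reward: prefer selecting roughly 15% of units, in consecutive runs.
    ratio_term = -abs(labels.float().mean() - 0.15)
    switches = (labels[1:] - labels[:-1]).abs().sum()
    return ratio_term - 0.1 * switches

torch.manual_seed(0)
model = TinyExtractor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
units = torch.randn(20, 8)  # 20 text units with 8-dim toy representations

for step in range(50):
    probs = model(units)
    dist = torch.distributions.Bernoulli(probs=probs)
    sample = dist.sample()                                   # sampled extraction labels
    reward = toy_reward(sample)
    loss = -(reward.detach() * dist.log_prob(sample).sum())  # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```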
GEM-SciDuet-train-36#paper-1050#slide-2
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is costprohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QA-Summ+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite that they have a large answer space.", "Based on the analysis we would suggest future work to consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or less.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness on generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs superior, and combining chunks with LSTM representations yield the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or ob- See et al.", "(2017) .", "Our systems tested were the supervised extractor, and our full model (NER).", "ject of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document), against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion the data was analyzed manually for accuracy since turkers entered each answer as free text, and to remove any meaningless datapoints.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 
performance times.", "However, we observe a large margin in QA accuracy for our full system compared to the abstractive system and our supervised approach.", "Although participants rated the informativeness of the summaries to be the same, our systems yielded higher performance.", "This strongly indicates that having a system which makes use of document comprehension has a tangible effect when applied to a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, as assessed by both automatic metrics and human evaluators." ] }
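The reward described in the paper content above combines four terms, R(y) = R_c(y) + γ R_a(y) + α R_f(y) + β R_l(y): question-answering competency, unigram adequacy, fluency (penalizing 0/1 switches in the label sequence), and a length term tied to the target selection ratio δ. Below is a minimal, self-contained Python sketch of how such a reward could be computed; the QA competency term is passed in as a precomputed average log-likelihood because it depends on the trained QA module, and the weight and ratio values are illustrative placeholders, not the tuned settings.

```python
# Sketch of R(y) = R_c + gamma*R_a + alpha*R_f + beta*R_l as described above.
# gamma, alpha, beta, delta are illustrative placeholders, not tuned values.

def adequacy(summary_tokens, reference_tokens):
    """R_a: fraction of reference unigrams covered by the summary."""
    overlap = len(set(summary_tokens) & set(reference_tokens))
    return overlap / max(len(reference_tokens), 1)

def fluency(labels):
    """R_f: negative count of 0/1 switches in the extraction label sequence."""
    return -sum(abs(labels[t] - labels[t - 1]) for t in range(1, len(labels)))

def length_penalty(labels, delta=0.15):
    """R_l: negative deviation of the selection ratio from the target ratio."""
    ratio = sum(labels) / max(len(labels), 1)
    return -abs(ratio - delta)

def reward(labels, summary_tokens, reference_tokens, qa_loglik,
           gamma=1.0, alpha=0.5, beta=1.0, delta=0.15):
    """Combine precomputed QA competency (avg. log-likelihood of correct
    answers) with the intrinsic adequacy, fluency and length terms."""
    return (qa_loglik
            + gamma * adequacy(summary_tokens, reference_tokens)
            + alpha * fluency(labels)
            + beta * length_penalty(labels, delta))

# toy usage
labels = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
print(reward(labels, ["judge", "sentenced", "agent"],
             ["agent", "sentenced", "to", "jail"], qa_loglik=-1.2))
```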
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-2
Representing an Extraction Unit
We obtain text chunks by breaking down constituent parse tree until each fragment governs at most 5 words.
We obtain text chunks by breaking down constituent parse tree until each fragment governs at most 5 words.
[]
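The slide above condenses the chunking rule from the paper: constituency parse trees are split top-down until every fragment governs at most five words, and the resulting fragments become the extraction units. A small sketch of that rule using NLTK's Tree class follows; the bracketed parse string is a hand-written example standing in for the output of a real parser.

```python
from nltk import Tree

MAX_CHUNK_LEN = 5  # each fragment may govern at most 5 words

def chunks_from_parse(node):
    """Split a constituency parse top-down until every fragment governs at
    most MAX_CHUNK_LEN words; return the chunks as lists of words."""
    if not isinstance(node, Tree):          # a bare token is its own chunk
        return [[node]]
    if len(node.leaves()) <= MAX_CHUNK_LEN:
        return [node.leaves()]
    pieces = []
    for child in node:                      # recurse into the children
        pieces.extend(chunks_from_parse(child))
    return pieces

# made-up parse of a short sentence, for illustration only
parse = Tree.fromstring(
    "(S (NP (DT The) (NN judge)) (VP (VBD sentenced) (NP (DT a) (JJ former) "
    "(NN agent)) (PP (TO to) (NP (CD six) (NNS months)))) (. .))")
print(chunks_from_parse(parse))
# [['The', 'judge'], ['sentenced'], ['a', 'former', 'agent'],
#  ['to', 'six', 'months'], ['.']]
```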
GEM-SciDuet-train-36#paper-1050#slide-4
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
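The Cloze-style question-answer pairs mentioned in the abstract are built by blanking an answer token out of each human-abstract sentence. The sketch below illustrates that construction; the paper derives answer candidates from Stanford CoreNLP entities and dependency roots, so the hand-supplied candidate set here is only a stand-in for that pipeline.

```python
def cloze_pairs(abstract_sentences, answer_candidates, blank="______"):
    """Turn each human-abstract sentence into Cloze question/answer pairs by
    blanking out any candidate answer token that occurs in the sentence."""
    pairs = []
    for sent in abstract_sentences:
        tokens = sent.split()
        for i, tok in enumerate(tokens):
            word = tok.strip(",.")
            if word in answer_candidates:
                question = " ".join(tokens[:i] + [blank] + tokens[i + 1:])
                pairs.append((question, word))
    return pairs

# toy usage; the candidate set stands in for CoreNLP NER / dependency roots
abstract = [
    "Former TSA agent Daniel Boykin videotaped his co-worker, authorities say.",
    "Police found 90 videos and 1,500 photos on his phone and computer.",
]
candidates = {"Boykin", "TSA", "phone"}
for q, a in cloze_pairs(abstract, candidates):
    print(a, "->", q)
```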
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QA-Summ+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite that they have a large answer space.", "Based on the analysis we would suggest future work to consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or less.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness on generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs superior, and combining chunks with LSTM representations yield the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or ob- See et al.", "(2017) .", "Our systems tested were the supervised extractor, and our full model (NER).", "ject of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document), against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion the data was analyzed manually for accuracy since turkers entered each answer as free text, and to remove any meaningless datapoints.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 
performance times.", "However we observe a large margin in QA accuracy in our full system compared to the abstractive and our supervised approach.", "Although participants rated the informativeness of the summaries to be the same our systems yielded a higher performance.", "This strongly indicates that having a system which makes using of document comprehension has a tangible effect when applied towards a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-4
Constructing an Extractive Summary
It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article. The task can be formulated as a sequence labeling problem. We build a framework to extract summary units where the importance of the t-th source unit is characterized by its position in the document and its relationship with the partial summary. Kristjan Arumae and Fei Liu. Guiding Extractive Summarization with Question-Answering Rewards - NAACL 2019
It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article. The task can be formulated as a sequence labeling problem. We build a framework to extract summary units where the importance of the t-th source unit is characterized by its position in the document and its relationship with the partial summary. Kristjan Arumae and Fei Liu. Guiding Extractive Summarization with Question-Answering Rewards - NAACL 2019
[]
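The Cloze-style question construction described in the paper content above (take an abstract sentence, pick an answer token, and replace it with a blank) is simple to illustrate. This is only a sketch: the paper uses Stanford CoreNLP to select answer tokens (named entities, subjects/objects, or root words) and filters rare answers, whereas here the candidate answers are passed in by hand.

```python
# Sketch of building Cloze-style question-answer pairs from an abstract
# sentence. `candidate_answers` is supplied explicitly for the example.

def make_cloze_pairs(abstract_sentence, candidate_answers):
    tokens = abstract_sentence.split()
    pairs = []
    for i, tok in enumerate(tokens):
        if tok in candidate_answers:
            question = " ".join(tokens[:i] + ["______"] + tokens[i + 1:])
            pairs.append((question, tok))
    return pairs

# Example:
sentence = "Police found 90 videos and 1,500 photos of the victim on his phone"
print(make_cloze_pairs(sentence, {"Police", "victim"}))
```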
GEM-SciDuet-train-36#paper-1050#slide-5
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QA-Summ+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite that they have a large answer space.", "Based on the analysis we would suggest future work to consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or less.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness on generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs superior, and combining chunks with LSTM representations yield the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or ob- See et al.", "(2017) .", "Our systems tested were the supervised extractor, and our full model (NER).", "ject of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document), against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion the data was analyzed manually for accuracy since turkers entered each answer as free text, and to remove any meaningless datapoints.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 
performance times.", "However we observe a large margin in QA accuracy in our full system compared to the abstractive and our supervised approach.", "Although participants rated the informativeness of the summaries to be the same our systems yielded a higher performance.", "This strongly indicates that having a system which makes using of document comprehension has a tangible effect when applied towards a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-5
Summary Encoding
Position in the document; relationship with the partial summary. We employ a multilayer perceptron to predict how likely the unit is to be included in the summary.
Position in the document; relationship with the partial summary. We employ a multilayer perceptron to predict how likely the unit is to be included in the summary.
[]
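The "Summary Encoding" slide above corresponds to Eqs. (4)-(5) of the paper: a multilayer perceptron scores each extraction unit from its encoder representation, its sinusoidal positional embedding, and the partial-summary state. The PyTorch sketch below is a hedged illustration of that scorer; the dimensions are assumptions, and the partial-summary state s_t is taken as produced elsewhere by the unidirectional LSTM over y_{t-1} ⊗ h_{t-1} (Eq. 3).

```python
import torch
import torch.nn as nn

class ExtractionScorer(nn.Module):
    """Hedged sketch of Eqs. (4)-(5): concatenate the unit representation h_t,
    its positional embedding g_t, and the partial-summary state s_t, pass them
    through a ReLU layer, and emit p(y_t = 1 | y_<t, x)."""

    def __init__(self, unit_dim=512, pos_dim=30, summ_dim=128, hidden_dim=256):
        super().__init__()
        self.hidden = nn.Linear(unit_dim + pos_dim + summ_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, h_t, g_t, s_t):
        a_t = torch.relu(self.hidden(torch.cat([h_t, g_t, s_t], dim=-1)))
        return torch.sigmoid(self.out(a_t)).squeeze(-1)

# Toy usage for a batch of two documents; s_t would normally come from the
# unidirectional LSTM over y_{t-1} * h_{t-1} (Eq. 3), here it is just zeros.
scorer = ExtractionScorer()
h_t = torch.randn(2, 512)    # BiLSTM/CNN representation of the current unit
g_t = torch.randn(2, 30)     # sinusoidal positional embedding of its sentence
s_t = torch.zeros(2, 128)    # partial-summary state (nothing selected yet)
print(scorer(h_t, g_t, s_t))  # two probabilities in (0, 1)
```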
GEM-SciDuet-train-36#paper-1050#slide-6
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QA-Summ+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite that they have a large answer space.", "Based on the analysis we would suggest future work to consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or less.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness on generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs superior, and combining chunks with LSTM representations yield the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or ob- See et al.", "(2017) .", "Our systems tested were the supervised extractor, and our full model (NER).", "ject of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document), against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion the data was analyzed manually for accuracy since turkers entered each answer as free text, and to remove any meaningless datapoints.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 
performance times.", "However we observe a large margin in QA accuracy in our full system compared to the abstractive and our supervised approach.", "Although participants rated the informativeness of the summaries to be the same our systems yielded a higher performance.", "This strongly indicates that having a system which makes using of document comprehension has a tangible effect when applied towards a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-6
Question Answering
Question-answer (QA) pairs can be conveniently developed from human abstracts. For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a cloze-style QA pair. We set an answer token to be either a salient word or a named entity to limit the space of potential answers.
Question-answer (QA) pairs can be conveniently developed from human abstracts. For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a cloze-style QA pair. We set an answer token to be either a salient word or a named entity to limit the space of potential answers.
[]
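The "Question Answering" slide above describes building Cloze-style QA pairs by blanking out answer tokens in the human abstract. The sketch below illustrates the idea with a stubbed entity list and abstract sentences adapted from the running example in this record; the paper itself obtains answer candidates from Stanford CoreNLP NER tags and from dependency roots and subjects/objects.

```python
# Sketch of Cloze-style QA-pair construction from a human abstract.  The entity
# list is a stub and the abstract sentences are adapted from this record's
# example; both are illustrative assumptions rather than the paper's exact data.

def make_qa_pairs(abstract_sentences, answer_candidates):
    pairs = []
    for sent in abstract_sentences:
        for answer in answer_candidates:
            if answer in sent:
                question = sent.replace(answer, "_____", 1)   # blank out the answer
                pairs.append((question, answer))
    return pairs

abstract = [
    "Former TSA agent Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.",
    "Authorities say they found 90 videos and 1,500 photos of the victim on Boykin's phone and computer.",
]
entities = ["TSA", "Daniel Boykin", "Boykin's"]
for question, answer in make_qa_pairs(abstract, entities):
    print(f"{answer!r:>16} -> {question}")
```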
GEM-SciDuet-train-36#paper-1050#slide-7
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than searching a full article.", "Moreover, we observe that the performance of QASumm+NoQ is between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and FullText, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite having a large answer space.", "Based on this analysis, we suggest future work consider using NER-based QA pairs, as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or fewer.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness at generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs better, and combining chunks with LSTM representations yields the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm, we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or object of the sentence, or a named entity.", "(Caption fragment: comparison against See et al. (2017); our systems tested were the supervised extractor and our full model (NER).)", "We compare our reinforced extracted summary (presented as a bold overlay to the document) against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally, we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion, the data was analyzed manually for accuracy, since turkers entered each answer as free text, and to remove any meaningless data points.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar
performance times.", "However, we observe a large margin in QA accuracy for our full system compared to the abstractive and our supervised approaches.", "Although participants rated the informativeness of the summaries to be about the same, our systems yielded higher accuracy.", "This strongly indicates that a system which makes use of document comprehension has a tangible effect when applied to a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
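The extraction units described in the paper content above are obtained by splitting each sentence's constituency parse top-down until every fragment governs at most five words. Below is a minimal sketch of that splitting rule in Python, using NLTK's Tree class; the paper does not name the parser or tree library used for chunking, so those choices are assumptions here.

from nltk import Tree

def chunks_from_parse(node, max_len=5):
    # A bare token (a leaf of the tree) is its own single-word chunk.
    if not isinstance(node, Tree):
        return [[node]]
    words = node.leaves()
    # Keep the whole fragment if it governs at most `max_len` words.
    if len(words) <= max_len:
        return [words]
    # Otherwise split top-down by descending into the children.
    chunks = []
    for child in node:
        chunks.extend(chunks_from_parse(child, max_len))
    return chunks

parse = Tree.fromstring(
    "(S (NP (DT The) (NN victim)) (VP (VBD filed) (NP (DT a) (NN complaint))) (. .))")
print(chunks_from_parse(parse))
# [['The', 'victim'], ['filed', 'a', 'complaint'], ['.']]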
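As best they can be read from the text above, Eq. (1)-(2) define the text-unit representation h_t^e either from a bidirectional LSTM (pairing the hidden states at a chunk's boundary words) or from a CNN with window sizes {1, 3, 5, 7} followed by max-pooling over the chunk's words. The PyTorch sketch below is one plausible reading of those equations, with the dimensions taken from the reported settings; module and argument names are mine, not the authors'.

import torch
import torch.nn as nn

class ChunkBiLSTMEncoder(nn.Module):
    # Eq. (1): h_t^e from a bidirectional LSTM over the document.
    def __init__(self, emb_dim=100, hidden=256):
        super().__init__()
        self.hidden = hidden
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, emb, spans):
        # emb: (1, doc_len, emb_dim); spans: [(start, end)] word indices of each chunk.
        out, _ = self.lstm(emb)                                    # (1, doc_len, 2*hidden)
        fwd, bwd = out[..., :self.hidden], out[..., self.hidden:]
        # Chunk vector: pair the boundary states (backward pass at the first word,
        # forward pass at the last word); a single word has start == end.
        reps = [torch.cat([bwd[:, s], fwd[:, e]], dim=-1) for s, e in spans]
        return torch.stack(reps, dim=1)                            # (1, n_units, 2*hidden)

class ChunkCNNEncoder(nn.Module):
    # Eq. (2): h_t^e from multi-window convolutions, max-pooled over the chunk's words.
    def __init__(self, emb_dim=100, n_filters=128, windows=(1, 3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k, padding=k // 2) for k in windows])

    def forward(self, emb, spans):
        x = emb.transpose(1, 2)                                    # (1, emb_dim, doc_len)
        feats = torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)
        feats = feats.transpose(1, 2)                              # (1, doc_len, 4*n_filters)
        reps = [feats[:, s:e + 1].max(dim=1).values for s, e in spans]
        return torch.stack(reps, dim=1)                            # (1, n_units, 4*n_filters)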
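Eq. (3)-(5) above describe the extractor: a unidirectional LSTM tracks the partial summary through the gated input y_{t-1} * h_{t-1}^e, and a small feed-forward network scores each unit from its representation, its sentence-level positional embedding g_t, and the partial-summary state s_t. Below is a minimal single-document sketch of one greedy decoding pass; the hidden sizes follow the reported settings, while the class name and the 0.5 decision threshold are assumptions.

import torch
import torch.nn as nn

class Extractor(nn.Module):
    def __init__(self, unit_dim=512, pos_dim=30, state_dim=128, mlp_dim=256):
        super().__init__()
        self.cell = nn.LSTMCell(unit_dim, state_dim)                           # Eq. (3)
        self.hidden_layer = nn.Linear(unit_dim + pos_dim + state_dim, mlp_dim)  # Eq. (4)
        self.out = nn.Linear(mlp_dim, 1)                                        # Eq. (5)

    def forward(self, units, pos_emb):
        # units: (n_units, unit_dim) encoder outputs; pos_emb: (n_units, pos_dim),
        # sinusoidal and shared by all units belonging to the same source sentence.
        state = (torch.zeros(1, self.cell.hidden_size),
                 torch.zeros(1, self.cell.hidden_size))
        probs = []
        for t in range(units.size(0)):
            features = torch.cat([units[t], pos_emb[t], state[0].squeeze(0)], dim=-1)
            a_t = torch.relu(self.hidden_layer(features))
            p_t = torch.sigmoid(self.out(a_t))            # p(y_t | y_<t, x)
            probs.append(p_t)
            # Gate the unit by the decision before updating the partial-summary LSTM.
            # At training time y_t would instead be the gold label (teacher forcing).
            y_t = (p_t > 0.5).float()
            state = self.cell((y_t * units[t]).unsqueeze(0), state)
        return torch.cat(probs)                           # (n_units,)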
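The ground-truth labels described above mark a source word as positive when it and an adjacent word both appear in the human abstract (unless both are stopwords), and mark a chunk positive when any of its words is positive. One plausible implementation of that heuristic, assuming case-insensitive exact matching:

def word_labels(doc_words, abstract_words, stopwords):
    abstract = {w.lower() for w in abstract_words}
    labels = [0] * len(doc_words)
    for i in range(len(doc_words) - 1):
        w, nxt = doc_words[i].lower(), doc_words[i + 1].lower()
        if w in abstract and nxt in abstract and not (w in stopwords and nxt in stopwords):
            labels[i] = labels[i + 1] = 1   # label consecutive words, not isolated ones
    return labels

def chunk_labels(chunk_spans, labels):
    # A chunk is summary-worthy if any of its component words was labelled positive.
    return [int(any(labels[s:e + 1])) for s, e in chunk_spans]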
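Question-answer pairs are built by blanking out an answer token (a named entity, or the ROOT / subject / object from the dependency parse) in each sentence of the human abstract, and rare answers are pruned. The sketch below leaves token extraction behind a placeholder function, since the paper relies on Stanford CoreNLP for NER and dependency parsing; the blank symbol and the per-call pruning are simplifications of mine (in practice the answer counts are taken over the whole training set).

from collections import Counter

def make_cloze_pairs(abstract_sentences, extract_answer_tokens, min_count=5, blank="_____"):
    # extract_answer_tokens(sentence) -> [(token_text, char_start, char_end), ...]
    # is a stand-in for the CoreNLP-based NER / ROOT / SUBJ-OBJ extraction.
    pairs = []
    for sent in abstract_sentences:
        for tok, start, end in extract_answer_tokens(sent):
            question = sent[:start] + blank + sent[end:]
            pairs.append((question, tok))
    # Prune answers that occur fewer than `min_count` times to limit the answer space.
    counts = Counter(answer for _, answer in pairs)
    return [(q, a) for q, a in pairs if counts[a] >= min_count]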
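Eq. (10)-(13) above arrive garbled; read from the surrounding text, the reward is R(y) = R_c(y) + gamma * R_a(y) + alpha * R_f(y) + beta * R_l(y), where R_c is the mean log-likelihood of the gold answers given the summary, R_a is unigram overlap with the reference normalized by reference length, R_f penalizes 0/1 switches in the label sequence, and R_l penalizes deviation of the selected-word ratio from delta. A sketch under that reading (the exact overlap function and the absolute value in the length term are assumptions):

def reward(labels, doc_words, ref_words, qa_logprobs, gamma, alpha, beta, delta=0.15):
    # QA competency, Eq. (10): average log-likelihood of the gold answers.
    r_c = sum(qa_logprobs) / len(qa_logprobs)
    # Adequacy, Eq. (11): clipped unigram overlap with the reference, normalized by |y*|.
    summary = [w.lower() for w, y in zip(doc_words, labels) if y == 1]
    ref = [w.lower() for w in ref_words]
    overlap = sum(min(summary.count(w), ref.count(w)) for w in set(ref))
    r_a = overlap / len(ref)
    # Fluency, Eq. (12): discourage 0/1 switches so selections form consecutive runs.
    r_f = -sum(abs(labels[t] - labels[t - 1]) for t in range(1, len(labels)))
    # Length, Eq. (13): keep the fraction of selected words close to delta (0.15 here).
    r_l = -abs(sum(labels) / len(labels) - delta)
    return r_c + gamma * r_a + alpha * r_f + beta * r_l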
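Training with Eq. (14) repeatedly samples extractive summaries from the extractor's inclusion probabilities and weights their log-probability by the reward. A compact REINFORCE-style step is sketched below; sampling every label independently from the probabilities ignores the autoregressive feedback through the partial-summary LSTM, so this is a simplification, and the reward callback is assumed to wrap a function like the one above.

import torch
from torch.distributions import Bernoulli

def reinforce_step(probs, reward_fn, optimizer, n_samples=1):
    # probs: (n_units,) inclusion probabilities p(y_t | y_<t, x) from the extractor.
    losses = []
    for _ in range(n_samples):
        dist = Bernoulli(probs=probs)
        sample = dist.sample()                     # a candidate extractive summary
        log_prob = dist.log_prob(sample).sum()     # log P(sample | x)
        r = reward_fn(sample)                      # scalar reward R(sample)
        losses.append(-r * log_prob)               # negative of the Eq. (14) estimate
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()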
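For reference, the training configuration stated in the settings above can be collected in one place; the values are copied from the text, and the dictionary layout is only a convenience of this sketch.

CONFIG = {
    "lstm_hidden_per_direction": 256,   # h_t^e and q_k are 512-dimensional
    "cnn_windows": (1, 3, 5, 7),
    "cnn_filters_per_window": 128,
    "partial_summary_dim": 128,         # s_t
    "word_embedding_dim": 100,          # GloVe (Pennington et al., 2014)
    "positional_encoding_dim": 30,      # sinusoidal (Vaswani et al., 2017)
    "max_article_words": 400,
    "qa_pairs_per_article": 10,         # K
    "batch_size": 128,
    "optimizer": "Adam",
    "summary_ratio_delta": 0.15,        # yields roughly 60-word extractive summaries
}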
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-7
Question Answering Model
Given an extractive summary containing a set of source text units, and a collection of question-answer pairs we develop a mechanism leveraging the summary to answer these questions (Chen et al. 2016). With an attention driven system, an extractive summary can be used to answer multiple questions related to the document. Kristjan Arumae and Fei Liu Guiding Extractive Summarization with Question-Answering Rewards - NAACL 2019
Given an extractive summary containing a set of source text units, and a collection of question-answer pairs we develop a mechanism leveraging the summary to answer these questions (Chen et al. 2016). With an attention driven system, an extractive summary can be used to answer multiple questions related to the document. Kristjan Arumae and Fei Liu Guiding Extractive Summarization with Question-Answering Rewards - NAACL 2019
[]
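The "Question Answering Model" slide above condenses Eq. (6)-(9) of the paper: the question is encoded by a bidirectional LSTM, a bilinear attention over the selected summary units builds a context vector, and a feed-forward layer over [c; q; |c - q|; c * q] scores the candidate answers. A PyTorch sketch of that reader follows; the answer-vocabulary size and all names are placeholders, and it assumes the unit and question vectors share the 512-dimensional size reported in the settings.

import torch
import torch.nn as nn

class QAReader(nn.Module):
    def __init__(self, unit_dim=512, q_dim=512, mlp_dim=256, n_answers=1000):
        super().__init__()
        self.W_alpha = nn.Parameter(torch.empty(unit_dim, q_dim))   # bilinear term, Eq. (7)
        nn.init.xavier_uniform_(self.W_alpha)
        self.hidden = nn.Linear(4 * unit_dim, mlp_dim)              # assumes unit_dim == q_dim
        self.out = nn.Linear(mlp_dim, n_answers)

    def forward(self, summary_units, q):
        # summary_units: (n_selected, unit_dim) vectors of the units chosen for the summary;
        # q: (q_dim,) question vector from a bidirectional LSTM, Eq. (6).
        scores = summary_units @ self.W_alpha @ q                   # (n_selected,)
        alpha = torch.softmax(scores, dim=0)                        # attention weights, Eq. (7)
        c = alpha @ summary_units                                   # context vector, Eq. (8)
        u = torch.cat([c, q, (c - q).abs(), c * q], dim=-1)         # feature vector, Eq. (9)
        return torch.log_softmax(self.out(torch.relu(self.hidden(u))), dim=-1)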
GEM-SciDuet-train-36#paper-1050#slide-8
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QA-Summ+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite that they have a large answer space.", "Based on the analysis we would suggest future work to consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or less.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness on generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs superior, and combining chunks with LSTM representations yield the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or ob- See et al.", "(2017) .", "Our systems tested were the supervised extractor, and our full model (NER).", "ject of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document), against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion the data was analyzed manually for accuracy since turkers entered each answer as free text, and to remove any meaningless datapoints.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 
performance times.", "However we observe a large margin in QA accuracy in our full system compared to the abstractive and our supervised approach.", "Although participants rated the informativeness of the summaries to be the same our systems yielded a higher performance.", "This strongly indicates that having a system which makes using of document comprehension has a tangible effect when applied towards a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
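The chunk-based extraction units evaluated above are obtained by splitting each sentence's constituency parse top-down until every fragment spans at most five words. A minimal sketch of that splitting step is given below; it assumes an NLTK-style `Tree` as input (the paper does not state which parsing toolkit was used), and the helper name `split_into_chunks` is ours, not the authors'.

```python
# Sketch of the top-down chunking described above: recursively descend the
# constituency tree and emit any subtree spanning at most `max_len` words.
# Assumes an nltk.Tree as input; the function name is illustrative.
from nltk import Tree

def split_into_chunks(tree, max_len=5):
    """Return a list of chunks (lists of tokens) covering the sentence."""
    if len(tree.leaves()) <= max_len or tree.height() <= 2:
        # Small enough (or a preterminal): keep this fragment as one chunk.
        return [tree.leaves()]
    chunks = []
    for child in tree:
        if isinstance(child, Tree):
            chunks.extend(split_into_chunks(child, max_len))
        else:  # bare token at this level
            chunks.append([child])
    return chunks

parse = Tree.fromstring(
    "(S (NP (DT The) (NN victim)) (VP (VBD filed) (NP (DT a) (NN complaint))) (. .))")
print(split_into_chunks(parse))
# [['The', 'victim'], ['filed', 'a', 'complaint'], ['.']]
```

With this scheme, word-level extraction is simply the special case where every chunk has length one, matching the setup compared in the results above.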
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-8
A Reinforcement Learning Framework
We introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function. The reward promotes summaries that are adequate, fluent, restricted in length, and competent in question answering. Training the system with policy gradient involves repeatedly sampling an extractive summary from the source document (Lei et al. 2016). At time t, the agent takes an action by sampling a decision based on p(y_t | y_<t, x), indicating whether the t-th source text unit is to be included in the summary.
We introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function. The reward promotes summaries that are adequate, fluent, restricted in length, and competent in question answering. Training the system with policy gradient involves repeatedly sampling an extractive summary from the source document (Lei et al. 2016). At time t, the agent takes an action by sampling a decision based on p(y_t | y_<t, x), indicating whether the t-th source text unit is to be included in the summary.
[]
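Given a sampled binary label sequence y over source text units, a minimal sketch of the adequacy, fluency, and length terms of the reward described on this slide could look as follows. The QA-competency term is passed in as a precomputed average log-likelihood from the question-answering module, the weights follow the paper's interpolation R(y) = R_c + gamma*R_a + alpha*R_f + beta*R_l with beta = 2*alpha, and the function and argument names are ours.

```python
# Sketch of the reward promoting adequate, fluent, length-restricted summaries
# that are competent in question answering. `qa_log_likelihood` stands in for
# the QA-competency term R_c produced by the question-answering module.

def summary_reward(labels, summary_tokens, reference_tokens,
                   qa_log_likelihood, gamma=1.0, alpha=1.0, delta=0.15):
    beta = 2 * alpha  # the paper ties beta to alpha this way

    # Adequacy: unigram overlap with the reference, normalized by its length
    # (a set-based approximation of the overlap count U(y, y*)).
    overlap = len(set(summary_tokens) & set(reference_tokens))
    r_adequacy = overlap / max(len(reference_tokens), 1)

    # Fluency: penalize 0/1 switches so selected units form contiguous spans.
    r_fluency = -sum(abs(labels[t] - labels[t - 1]) for t in range(1, len(labels)))

    # Length: keep the fraction of selected units close to the ratio delta.
    r_length = -abs(sum(labels) / max(len(labels), 1) - delta)

    return qa_log_likelihood + gamma * r_adequacy + alpha * r_fluency + beta * r_length

# Toy usage: a sampled label sequence over 10 source units.
labels = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]
print(summary_reward(labels, ["police", "found", "90", "videos"],
                     ["police", "found", "videos", "of", "the", "victim"],
                     qa_log_likelihood=-1.2))
```

In policy-gradient training, a reward computed this way on each sampled summary weights the log-probability of the sampled label sequence, so summaries that answer the questions well and stay fluent and concise are reinforced.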
GEM-SciDuet-train-36#paper-1050#slide-9
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QA-Summ+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite that they have a large answer space.", "Based on the analysis we would suggest future work to consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or less.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness on generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs superior, and combining chunks with LSTM representations yield the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or ob- See et al.", "(2017) .", "Our systems tested were the supervised extractor, and our full model (NER).", "ject of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document), against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion the data was analyzed manually for accuracy since turkers entered each answer as free text, and to remove any meaningless datapoints.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 
performance times.", "However we observe a large margin in QA accuracy in our full system compared to the abstractive and our supervised approach.", "Although participants rated the informativeness of the summaries to be the same our systems yielded a higher performance.", "This strongly indicates that having a system which makes using of document comprehension has a tangible effect when applied towards a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
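The attention-based question-answering step described above (Eqs. 6-9) can be sketched roughly as the PyTorch module below: a bilinear attention over the selected summary-unit vectors, a weighted context vector, the combined feature vector [c_k; q_k; |c_k - q_k|; c_k * q_k], and a softmax over candidate answers. The dimensions, module names, and answer-vocabulary size are illustrative assumptions rather than the authors' exact implementation.

```python
# Rough sketch of the QA component: alpha_{t,k} is proportional to
# exp(h_t^e W q_k), c_k is the attention-weighted summary context, and the
# output layer scores candidate answer tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SummaryQA(nn.Module):
    def __init__(self, hidden_dim=512, mlp_dim=256, num_answers=5000):
        super().__init__()
        self.bilinear = nn.Linear(hidden_dim, hidden_dim, bias=False)  # W_alpha
        self.mlp = nn.Linear(4 * hidden_dim, mlp_dim)                  # W_u
        self.out = nn.Linear(mlp_dim, num_answers)                     # W_e

    def forward(self, summary_units, question):
        # summary_units: (T, hidden_dim) vectors of the selected text units
        # question:      (hidden_dim,) question vector from the question BiLSTM
        scores = self.bilinear(summary_units) @ question   # (T,)
        alpha = F.softmax(scores, dim=0)                    # attention weights
        context = alpha @ summary_units                     # (hidden_dim,) c_k
        u = torch.cat([context, question,
                       torch.abs(context - question),
                       context * question], dim=-1)         # (4*hidden_dim,)
        return F.log_softmax(self.out(F.relu(self.mlp(u))), dim=-1)

qa = SummaryQA()
summary_units = torch.randn(12, 512)   # e.g. 12 selected chunks
question = torch.randn(512)
log_probs = qa(summary_units, question)
print(log_probs.shape)  # torch.Size([5000])
```

The average of these log-probabilities at the gold answers over all questions for a document gives the QA-competency term of the reward used during reinforcement learning.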
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-9
Recap
1. Representing an extraction unit. 2. A framework for extractive summarization. 3. Question answering as a task. 4. Combined reinforcement learning framework.
1. Representing an extraction unit. 2. A framework for extractive summarization. 3. Question answering as a task. 4. Combined reinforcement learning framework.
[]
GEM-SciDuet-train-36#paper-1050#slide-10
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QA-Summ+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite that they have a large answer space.", "Based on the analysis we would suggest future work to consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or less.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness on generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs superior, and combining chunks with LSTM representations yield the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or ob- See et al.", "(2017) .", "Our systems tested were the supervised extractor, and our full model (NER).", "ject of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document), against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion the data was analyzed manually for accuracy since turkers entered each answer as free text, and to remove any meaningless datapoints.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 
performance times.", "However we observe a large margin in QA accuracy in our full system compared to the abstractive and our supervised approach.", "Although participants rated the informativeness of the summaries to be the same our systems yielded a higher performance.", "This strongly indicates that having a system which makes using of document comprehension has a tangible effect when applied towards a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
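The paper_content above defines the training reward as R(y) = R_c(y) + γ·R_a(y) + α·R_f(y) + β·R_l(y), combining QA competency, unigram adequacy against the reference, a fluency penalty on 0/1 label switches, and a length penalty that keeps the selection ratio near δ. The sketch below only illustrates that combination; it is not the authors' implementation, the function and argument names are invented here, and the QA term is passed in as a precomputed average log-likelihood rather than computed by the QA reader described in the text.

```python
# Illustrative sketch of the combined reward described in the paper_content above:
#   R(y) = R_c(y) + gamma * R_a(y) + alpha * R_f(y) + beta * R_l(y)
# Not the authors' code: helper names are invented, weights are placeholders,
# and the QA term R_c is supplied as a precomputed average log-likelihood.

def adequacy(selected_words, reference_words):
    """Unigram overlap with the reference summary, normalized by its length (Eq. 11)."""
    remaining = list(reference_words)
    overlap = 0
    for w in selected_words:
        if w in remaining:
            remaining.remove(w)  # count each reference token at most once
            overlap += 1
    return overlap / max(len(reference_words), 1)

def fluency(labels):
    """Negative number of 0/1 switches in the extraction label sequence (Eq. 12)."""
    return -sum(abs(labels[t] - labels[t - 1]) for t in range(1, len(labels)))

def length_penalty(labels, delta=0.15):
    """Penalize selection ratios that drift away from the target ratio delta (Eq. 13)."""
    return -abs(sum(labels) / max(len(labels), 1) - delta)

def reward(labels, selected_words, reference_words, qa_logprob,
           gamma=1.0, alpha=0.5, beta=1.0, delta=0.15):
    """Combine the four reward components; qa_logprob stands in for R_c(y)."""
    return (qa_logprob
            + gamma * adequacy(selected_words, reference_words)
            + alpha * fluency(labels)
            + beta * length_penalty(labels, delta))
```

Per the settings reported later in the text, the interpolation weights are tuned on the dev set with β = 2α and δ = 0.15; the default values above are placeholders only.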
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-10
Experimental Results CNN
Models outperform the counterpart QASumm (No QA) that makes no use of the QA pairs by a substantial margin.
Models outperform the counterpart QASumm (No QA) that makes no use of the QA pairs by a substantial margin.
[]
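Each record above pairs a full paper (paper_content, paper_headers) with one slide's id, title, content text, target, and references. The sketch below shows one way to iterate over records of this shape; it assumes a local JSON-lines export with one record per line and the field names shown here — the filename and exact nesting are assumptions, not part of the dataset release.

```python
# Minimal sketch for reading records shaped like the ones above.
# Assumes a hypothetical local JSON-lines export; adjust the path and nesting as needed.
import json

def iter_records(path="sciduet_train.jsonl"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    for record in iter_records():
        sentences = record["paper_content"]["paper_content_text"]
        print(record["gem_id"], "|", record["slide_title"])
        print("  paper sentences:", len(sentences))
        print("  target:", record["target"][:80])
```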
GEM-SciDuet-train-36#paper-1050#slide-11
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is costprohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
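The abstract above (and the paper_content that follows) describes turning each human-abstract sentence into Cloze-style question-answer pairs by blanking an answer token such as a named entity or the sentence's root/subject/object word. The snippet below sketches only the blanking step; the answer tokens are supplied by the caller rather than extracted with a parser, and the example sentence is adapted from the CNN article excerpt quoted in the paper_content.

```python
# Sketch of building Cloze-style QA pairs by blanking an answer token in an
# abstract sentence, as described above. The paper finds answer tokens with
# named-entity and dependency parsing; that step is not reproduced here.

def make_cloze_pairs(sentence, answer_tokens, blank="_____"):
    """Return (question, answer) pairs, one per answer token found in the sentence."""
    words = sentence.split()
    pairs = []
    for answer in answer_tokens:
        if answer in words:
            question = " ".join(blank if w == answer else w for w in words)
            pairs.append((question, answer))
    return pairs

example = "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer ."
print(make_cloze_pairs(example, ["Police", "Boykin's"]))
```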
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QA-Summ+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite that they have a large answer space.", "Based on the analysis we would suggest future work to consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or less.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness on generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs superior, and combining chunks with LSTM representations yield the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or ob- See et al.", "(2017) .", "Our systems tested were the supervised extractor, and our full model (NER).", "ject of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document), against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion the data was analyzed manually for accuracy since turkers entered each answer as free text, and to remove any meaningless datapoints.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 
performance times.", "However we observe a large margin in QA accuracy in our full system compared to the abstractive and our supervised approach.", "Although participants rated the informativeness of the summaries to be the same our systems yielded a higher performance.", "This strongly indicates that having a system which makes using of document comprehension has a tangible effect when applied towards a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
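To make the composite reward described in the paper content above (Eqs. 10-13) concrete, here is a minimal sketch of how such a reward could be computed. This is an illustrative reimplementation, not the authors' code; the interpolation weights gamma, alpha, beta and the precomputed per-question QA log-likelihoods are assumed inputs, and the unigram-overlap term is one simple reading of Eq. (11).

```python
import numpy as np

def summary_reward(y, sys_tokens, ref_tokens, qa_log_probs,
                   gamma, alpha, beta, delta=0.15):
    """Composite reward R(y) = R_c + gamma*R_a + alpha*R_f + beta*R_l (Eqs. 10-13)."""
    # QA competency (Eq. 10): mean log-likelihood of answering the K questions
    # from the extracted summary; the per-question values are computed elsewhere.
    r_c = float(np.mean(qa_log_probs))

    # Adequacy (Eq. 11): fraction of reference unigrams covered by the system summary.
    sys_vocab = set(sys_tokens)
    r_a = sum(1 for w in ref_tokens if w in sys_vocab) / max(len(ref_tokens), 1)

    # Fluency (Eq. 12): penalize 0/1 switches in the label sequence over source units.
    r_f = -sum(abs(y[t] - y[t - 1]) for t in range(1, len(y)))

    # Length (Eq. 13): keep the ratio of selected units close to the threshold delta.
    r_l = -abs(sum(y) / max(len(y), 1) - delta)

    return r_c + gamma * r_a + alpha * r_f + beta * r_l
```

With the settings reported above (delta = 0.15, beta = 2*alpha), the length and fluency terms mainly act as regularizers on the sampled label sequence while the QA term drives content selection.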
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-11
Experimental Results: Daily Mail
We conjecture that maintaining a moderate number of answers is important to maximize performance.
[]
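The Cloze-style question construction described in the paper content (root, subject/object, and named-entity answers, with low-frequency answers pruned) could be approximated as follows. The paper uses Stanford CoreNLP; this sketch substitutes spaCy, so the entity and dependency label names are spaCy's and only approximate the paper's {PER, LOC, ORG, MISC} and {NSUBJ, CSUBJ, OBJ, IOBJ} sets, and the blank symbol is an arbitrary choice.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def cloze_pairs(abstract, blank="_____"):
    """Build (question, answer) pairs by blanking out answer tokens in each abstract sentence."""
    pairs = []
    for sent in nlp(abstract).sents:
        answers = {ent.text for ent in sent.ents
                   if ent.label_ in {"PERSON", "ORG", "GPE", "LOC"}}
        answers.add(sent.root.text)                             # ROOT of the dependency parse
        answers |= {tok.text for tok in sent
                    if tok.dep_ in {"nsubj", "csubj", "dobj"}}  # approximate subject/object words
        for ans in answers:
            if ans and ans in sent.text:
                pairs.append((sent.text.replace(ans, blank, 1), ans))
    return pairs
```

A corpus-level pass that discards answer tokens occurring fewer than 5 times, as the paper does, would follow this step.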
GEM-SciDuet-train-36#paper-1050#slide-12
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QA-Summ+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite that they have a large answer space.", "Based on the analysis we would suggest future work to consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or less.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness on generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs superior, and combining chunks with LSTM representations yield the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or ob- See et al.", "(2017) .", "Our systems tested were the supervised extractor, and our full model (NER).", "ject of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document), against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion the data was analyzed manually for accuracy since turkers entered each answer as free text, and to remove any meaningless datapoints.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 
performance times.", "However we observe a large margin in QA accuracy in our full system compared to the abstractive and our supervised approach.", "Although participants rated the informativeness of the summaries to be the same our systems yielded a higher performance.", "This strongly indicates that having a system which makes using of document comprehension has a tangible effect when applied towards a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
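The attention-based answering module in Eqs. (6)-(9) of the paper content maps summary unit vectors and a question vector to a distribution over candidate answers. The PyTorch sketch below assumes 512-dimensional unit and question vectors, as in the reported settings, and an arbitrary candidate-answer vocabulary size; it is an illustration rather than the authors' code.

```python
import torch
import torch.nn as nn

class SummaryQA(nn.Module):
    """Bilinear-attention reader over summary units (Eqs. 6-9)."""

    def __init__(self, dim=512, hidden=256, n_answers=5000):
        super().__init__()
        self.w_alpha = nn.Parameter(torch.randn(dim, dim) * 0.01)   # bilinear term W_alpha
        self.out = nn.Sequential(nn.Linear(4 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_answers))

    def forward(self, h_summary, q):
        # h_summary: (T, dim) vectors of units selected in the summary; q: (dim,) question vector
        scores = h_summary @ self.w_alpha @ q             # alpha_{t,k} logits (Eq. 7)
        alpha = torch.softmax(scores, dim=0)
        c = (alpha.unsqueeze(1) * h_summary).sum(dim=0)   # context vector c_k (Eq. 8)
        u = torch.cat([c, q, (c - q).abs(), c * q])       # u_k (Eq. 9)
        return torch.log_softmax(self.out(u), dim=-1)     # log P(e_k | S, Q_k)
```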
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-12
Question Answering Results
Table columns: NoText, QASumm (no QA), GoldSumm, Full Text (Train/Dev accuracy per answer type). We observe that question-answering with GoldSumm performs the best for all answer types. The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering. Kristjan Arumae and Fei Liu, Guiding Extractive Summarization with Question-Answering Rewards, NAACL 2019
Table columns: NoText, QASumm (no QA), GoldSumm, Full Text (Train/Dev accuracy per answer type). We observe that question-answering with GoldSumm performs the best for all answer types. The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering. Kristjan Arumae and Fei Liu, Guiding Extractive Summarization with Question-Answering Rewards, NAACL 2019
[]
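The paper text embedded in these entries describes a composite reward (Eqs. 10-13) that guides the extractor: question-answering competency, adequacy, fluency, and a length constraint, combined with interpolation weights tuned on the dev set. The minimal Python sketch below shows one way such a reward could be computed; the function name summary_reward, its argument names, and the set-based approximation of the unigram overlap U are assumptions of this sketch rather than the authors' implementation, and qa_log_probs stands in for the trained QA module's log P(answer | summary, question).

```python
# Minimal sketch of the composite reward described in the paper text (Eqs. 10-13).
# Names are illustrative; gamma, alpha, beta are tuned on the dev set in the paper
# (with beta = 2 * alpha), so the defaults here are placeholders.

def summary_reward(labels, system_unigrams, reference_unigrams,
                   qa_log_probs, gamma=1.0, alpha=1.0, beta=2.0, delta=0.15):
    """labels: 0/1 extraction decisions per source unit;
    qa_log_probs: log-likelihoods of the gold answers given the sampled summary."""
    # QA competency: average log-likelihood of answering the K questions (Eq. 10)
    r_qa = sum(qa_log_probs) / max(len(qa_log_probs), 1)
    # Adequacy: fraction of reference unigrams covered by the system summary (Eq. 11),
    # approximating U(y, y*) with a set intersection
    r_adequacy = len(set(system_unigrams) & set(reference_unigrams)) / max(len(reference_unigrams), 1)
    # Fluency: penalize 0/1 switches so selected words form consecutive runs (Eq. 12)
    r_fluency = -sum(abs(labels[t] - labels[t - 1]) for t in range(1, len(labels)))
    # Length: keep the selection ratio close to the target threshold delta (Eq. 13)
    r_length = -abs(sum(labels) / max(len(labels), 1) - delta)
    return r_qa + gamma * r_adequacy + alpha * r_fluency + beta * r_length
```

As the paper text notes, the fluency term discourages isolated selections so that highlighted words form consecutive spans, which is why it penalizes every 0/1 switch in the label sequence.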
GEM-SciDuet-train-36#paper-1050#slide-13
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QA-Summ+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite that they have a large answer space.", "Based on the analysis we would suggest future work to consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or less.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness on generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs superior, and combining chunks with LSTM representations yield the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or ob- See et al.", "(2017) .", "Our systems tested were the supervised extractor, and our full model (NER).", "ject of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document), against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion the data was analyzed manually for accuracy since turkers entered each answer as free text, and to remove any meaningless datapoints.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 
performance times.", "However we observe a large margin in QA accuracy in our full system compared to the abstractive and our supervised approach.", "Although participants rated the informativeness of the summaries to be the same our systems yielded a higher performance.", "This strongly indicates that having a system which makes using of document comprehension has a tangible effect when applied towards a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-13
Human Evaluation
We conducted a human evaluation on Amazon Mechanical Turk to assess whether the highlighted summaries contribute to document understanding. We presented each participant with the document and fill-in-the-blank questions created from the human abstracts. We compare our reinforced extracted summary (presented as a bold overlay to the document) against our supervised method, abstractive summaries generated by See et al. (2017), and the human abstracts in full. Additionally, we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative). Table columns: Summary Type, Time, Acc., Inform. Although participants rated the informativeness of the summaries to be the same, our systems yielded a higher performance. Kristjan Arumae and Fei Liu, Guiding Extractive Summarization with Question-Answering Rewards, NAACL 2019
We conducted a human evaluation on Amazon Mechanical Turk to assess whether the highlighted summaries contribute to document understanding. We presented each participant with the document and fill-in-the-blank questions created from the human abstracts. We compare our reinforced extracted summary (presented as a bold overlay to the document) against our supervised method, abstractive summaries generated by See et al. (2017), and the human abstracts in full. Additionally, we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative). Table columns: Summary Type, Time, Acc., Inform. Although participants rated the informativeness of the summaries to be the same, our systems yielded a higher performance. Kristjan Arumae and Fei Liu, Guiding Extractive Summarization with Question-Answering Rewards, NAACL 2019
[]
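The paper text in these entries also explains how Cloze-style question-answer pairs are derived from human abstracts: an answer token (a named entity, the root word, or a subject/object) is identified in each abstract sentence and replaced with a blank. The short Python sketch below illustrates that construction under simplifying assumptions; make_cloze_pairs and the blank symbol are names invented for this sketch, and the answer candidates are assumed to be supplied by an external tagger or parser (the authors use Stanford CoreNLP) rather than extracted here.

```python
# Minimal sketch of Cloze-style QA pair construction from a human abstract,
# as described in the paper text. Not the authors' implementation.

def make_cloze_pairs(abstract_sentences, answer_tokens_per_sentence, blank="_____"):
    """abstract_sentences: sentence strings from the human abstract;
    answer_tokens_per_sentence: per sentence, candidate answer tokens
    (entities, root, subject/object) from a tagger or dependency parser."""
    qa_pairs = []
    for sentence, answers in zip(abstract_sentences, answer_tokens_per_sentence):
        for answer in answers:
            if answer in sentence:
                # Blank out one occurrence of the answer token to form the question
                question = sentence.replace(answer, blank, 1)
                qa_pairs.append((question, answer))
    return qa_pairs

# Illustrative use with a sentence based on the example article:
pairs = make_cloze_pairs(
    ["Daniel Boykin entered the woman's home multiple times."],
    [["Daniel Boykin"]],
)
# pairs == [("_____ entered the woman's home multiple times.", "Daniel Boykin")]
```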
GEM-SciDuet-train-36#paper-1050#slide-14
1050
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Our increasingly digitized lifestyle calls for summarization techniques to produce short and accurate summaries that can be accessed at any time.", "These summaries should factually adhere to the content of the source text and present the reader with the key points therein.", "Although neural abstractive summarization has shown promising results (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017) , these methods can have potential drawbacks.", "It was revealed that abstracts generated by neural systems sometimes alter or falsify objective details, and introduce new meanings not present in the original text (Cao et al., 2018) .", "Reading these abstracts can lead to misinterpretation of the source materials, which is clearly undesirable.", "In this work, we focus on extractive summarization, where the summaries are guaranteed (CNN) A judge this week sentenced a former TSA agent to six months in jail for secretly videotaping a female co-worker while she was in the bathroom, prosecutors said.", "During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data.", "Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer .", "The victim filed a complaint after seeing images of herself on his phone last year.", "[...] 
Comprehension Questions (Human Abstract): Former Daniel Boykin, 33, videotaped his female co-worker in the restroom, authorities say.", "Authorities say they found 90 videos and 1,500 photos of the victim on and computer.", "to remain faithful to the original content.", "Our system seeks to identify salient and consecutive sequences of words from the source document, and highlight them in the text to assist users in browsing and comprehending lengthy documents.", "An example is illustrated in Table 1 .", "A primary challenge faced by extractive summarizers is the lack of annotated data.", "The cost of hiring humans to label a necessary amount of source articles with summary words, good for training a modern classifier, can be prohibitive.", "Previous work has exploited using human abstracts to derive labels for extraction units (Woodsend and Lapata, 2010) .", "E.g., a source word is tagged 1 if it appears in the abstract, 0 otherwise.", "Although pairs of source articles and human abstracts are abundant, labels derived in this way are not necessarily best since summary saliency can not be easily captured with a rule based categorization.", "Considering that human abstracts involve generalization, paraphrasing, and can con-tain words not present in the source text, leveraging them to derive labels for extraction units can be suboptimal.", "In this work, we investigate a new strategy that seeks to better utilize human abstracts to guide the extraction of summary text units.", "We hypothesize that quality extractive summaries should contain informative content so that they can be used as document surrogates to answer important questions, thereby satisfying users' information needs.", "The question-answer pairs can be conveniently developed from human abstracts.", "Our proposed approach identifies answer tokens from each sentence of the human abstract, then replaces each answer token with a blank to create a Cloze-style question-answer pair.", "To answer all questions (≈human abstract), the system summary must contain content that is semantically close to and collectively resembles the human abstract.", "In this paper, we construct an extractive summary by selecting consecutive word sequences from the source document.", "To accomplish this we utilize a novel reinforcement learning framework to explore the space of possible extractive summaries and assess each summary using a novel reward function judging the summary's adequacy, fluency, length, and its competency to answer important questions.", "The system learns to sample extractive summaries yielding the highest expected rewards, with no pre-derived extraction labels needed.", "This work extends the methodology of Arumae and Liu (2018) with new representations of extraction units and thorough experimental evaluation.", "The contributions of this research can be summarized as follows: • we describe a novel framework generating extractive summaries by selecting consecutive sequences of words from source documents.", "This new system explores various encoding mechanisms, as well as new sampling techniques to capture phrase level data.", "Such a framework has not been thoroughly investigated in the past; • We conduct a methodical empirical evaluation from the point of view of information saliency.", "Rather than solely relying on automatic summarization evaluation methods, we also show the advantages of our system by assessing the summary quality with reading comprehension tasks.", "Our summaries compare favorably with the automatic metrics 
against state of the art, and show promising results against baselines when evaluated by humans for question answering.", "Related Work Extractive summarization has seen growing popularity in the past decades (Nenkova and McKeown, 2011) .", "The methods focus on selecting representative sentences from the document(s) and optionally deleting unimportant sentence constituents to form a summary (Knight and Marcu, 2002; Radev et al., 2004; Zajic et al., 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Lin and Bilmes, 2010; Wang et al., 2013; Li et al., 2013 Li et al., , 2014 Hong et al., 2014; Yogatama et al., 2015) .", "A majority of the methods are unsupervised.", "They estimate sentence importance based on the sentence's length and position in the document, whether the sentence contains topical content and its relationship with other sentences.", "The summarization objective is to select a handful of sentences to maximize the coverage of important content while minimizing summary redundancy.", "Although unsupervised methods are promising, they cannot benefit from the large-scale training data harvested from the Web (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) .", "Neural extractive summarization has focused primarily on extracting sentences (Nallapati et al., 2017; Cao et al., 2017; Isonuma et al., 2017; Tarnpradab et al., 2017; Zhou et al., 2018; Kedzie et al., 2018) .", "These studies exploit parallel training data consisting of source articles and story highlights (i.e., human abstracts) to create ground-truth labels for sentences.", "A neural extractive summarizer learns to predict a binary label for each source sentence indicating if it is to be included in the summary.", "These studies build distributed sentence representations using neural networks (Cheng and Lapata, 2016; Yasunaga et al., 2017) and use reinforcement learning to optimize the evaluation metric (Narayan et al., 2018b) and improve summary coherence (Wu and Hu, 2018) .", "However, sentence extraction can be coarse and in many cases, only a part of the sentence is worthy to be added to the summary.", "In this study, we perform finer-grained extractive summarization by allowing the system to select consecutive sequences of words rather than sentences to form a summary.", "Interestingly, studies reveal that summaries generated by recent neural abstractive systems are, in fact, quite \"extractive.\"", "Abstractive systems often adopt the encoder-decoder architecture with an attention mechanism (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; Guo et al., 2018; Gehrmann et al., 2018; Lebanoff et al., 2018; Ce-likyilmaz et al., 2018) .", "The encoder condenses a source sequence to a fixed-length vector and the decoder takes the vector as input and generates a summary by predicting one word at a time.", "See, Liu, and Manning (2017) suggest that about 35% of the summary sentences occur in the source documents, and 90% of summary n-grams appear in the source.", "Moreover, the summaries may contain inaccurate factual details and introduce new meanings not present in the original text (Cao et al., 2018; .", "It thus raises concerns as to whether such systems can be used in realworld scenarios to summarize materials such as legal documents.", "In this work, we choose to focus on extractive summarization where selected word sequences can be highlighted on the source text to avoid change of meaning.", "Our proposed method is inspired by the work of Lei et al.", "(2016) who seek to identify 
rationales from textual input to support sentiment classification and question retrieval.", "Distinct from this previous work, we focus on generating generic document summaries.", "We present a novel supervised framework encouraging the selection of consecutive sequences of words to form an extractive summary.", "Further, we leverage reinforcement learning to explore the space of possible extractive summaries and promote those that are fluent, adequate, and competent in question answering.", "We seek to test the hypothesis that successful summaries can serve as document surrogates to answer important questions, and moreover, ground-truth questionanswer pairs can be derived from human abstracts.", "In the following section we describe our proposed approach in details.", "Our Approach Let S be an extractive summary consisting of text segments selected from a source document x.", "The summary can be mapped to a sequence of binary labels y assigned to document words.", "In this section we first present a supervised framework for identifying consecutive sequences of words that are summary-worthy, then proceed by describing our question-answering rewards and a deep reinforcement learning framework to guide the selection of summaries so that they can be used as document surrogates to answer important questions.", "1 Representing an Extraction Unit How best to decompose a source document into a set of text units useful for extractive summarization remains an open problem.", "A natural choice is to use words as extraction units.", "However, this choice ignores the cohesiveness of text.", "A text chunk (e.g., a prepositional phrase) can be either selected to the summary in its entirety or not at all.", "In this paper we experiment with both schemes, using either words or chunks as extraction units.", "When a text chunk is selected in the summary, all its consisting words are selected.", "We obtain text chunks by breaking down the sentence constituent parse tree in a top-down manner until each tree fragment governs at most 5 words.", "A chunk thus can contain from 1 to 5 words.", "Additionally, word level modeling can be considered a special case of chunks where the length of each phrase is 1.", "It is important to note that using sentences as extraction units is out of the scope of this paper, because our work focuses on finer-grained extraction units such as words and phrases and this is notably a more challenging task.", "The most successful neural models for encoding a piece of text to a fixed-length vector include the recurrent (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNN; Kim et al., 2014) , among others.", "A recent study by Khandelwal et al.", "(2018) reported that the recurrent networks are capable of memorizing a recent context of about 20 tokens and the model is highly sensitive to word order, whereas this is less the case for CNN whose max-pooling operation makes it agnostic to word order.", "We implement both networks and are curious to compare their effectiveness at encoding extraction units for summarization.", "{h e t } = f Bi-LSTM 1 (x) (1) or {h e t } = f CNN 2 (x) (2) Our model first encodes the source document using a bidirectional LSTM with the forward and backward passes (Eq.", "(1)).", "The representation of the t-th source word h e t = [ ← − h e t || − → h e t ] is the con- catenation of the hidden states in both directions.", "A chunk is similarly denoted by h e t = [ ← − h e t || − → h e t+n ] where t and t + n are the indices of its beginning 
and ending words.", "In both cases, a fixed-length vector (h e t ∈ R m ) is created for the word/chunk.", "Further, our CNN encoder (Eq.", "(2)) uses a sliding window of {1,3,5,7} words, corresponding to the kernel sizes, to scan through the source document.", "We apply a number of filters to each window size to extract local features.", "The t-th source word is represented by the concatenation of feature maps (an m-dimensional vector).", "To obtain the chunk vector we perform max-pooling over the representations of its consisting words (from t to t + n).", "In the following we use h e t to denote the vector representation of the t-th extraction unit, may it be a word or a chunk, generated using either encoder.", "Constructing an Extractive Summary It is desirable to first develop a supervised framework for identifying summary-worthy text segments from a source article.", "These segments collectively form an extractive summary to be highlighted on the source text.", "The task can be formulated as a sequence labeling problem: a source text unit (a word or chunk) is labelled 1 if it is to be included in the summary and 0 otherwise.", "It is not unusual to develop an auto-regressive model to perform sequence labeling, where the label of the t-th extraction unit (y t ) depends on all previous labels (y <t ).", "Given this hypothesis, we build a framework to extract summary units where the importance of the t-th source unit is characterized by its informativeness (encoded in h e t ), its position in the document, and relationship with the partial summary.", "The details are presented below.", "We use a positional embedding (g t ) to signify the position of the t-th text unit in the source document.", "The position corresponds to the index of the source sentence containing the t-th unit, and further, all text units belonging to the same sentence share the same positional embedding.", "We apply sinusoidal initialization to the embeddings, following Vaswani et al.", "(2017) .", "Importantly, positional embeddings allow us to inject macro-positional knowledge about words/chunks into a neural summarization framework to offset the natural bias that humans tend to have on putting important content at the beginning of an article.", "Next, we build a representation for the partial summary to aid the system in selecting future text units.", "The representation s t is expected to encode the extraction decisions up to time t-1 and it can be realized using a unidirectional LSTM network (Eq.", "(3)).", "The t-th input to the network is represented as y t−1 ⊗ h e t−1 where y t−1 is a binary label serving as a gating mechanism to control if the semantic content of the previous text unit (h e t−1 ) is to be included in the summary (\"⊗\" corresponds to elementwise product).", "During training, we apply teacher forcing and y t−1 is the ground-truth extraction label for the (t − 1)-th unit; at test time, Figure 1 : A unidirectional LSTM (blue, Eq.", "(3)) encodes the partial summary, while the multilayer perceptron network (orange, Eq.", "(4-5)) utilizes the text unit representation (h e t ), its positional embedding (gt), and the partial summary representation (st) to determine if the t-th text unit is to be included in the summary.", "Best viewed in color.", "g t 1 g t g t+1 g t+2 s t+2 s t+1 s t s t 1 h e t 1 h e t h e t+1 h e t+2 y t−1 is generated on-the-fly by obtaining the label yielding the highest probability according to Eq.", "(5).", "In the previous work of Cheng and Lapata (2016) and Nallapati et 
al.", "(2017) , similar auto-regressive models are developed to identify summary sentences.", "Different from the previous work, this study focuses on extracting consecutive sequences of words and chunks from the source document, and the partial summary representation is particularly useful for predicting if the next unit is to be included in the summary to improve summary fluency.", "s t = f Uni-LSTM 3 (s t−1 , y t−1 ⊗ h e t−1 ) (3) Given the partial summary representation (s t ), and representation of the text unit (h e t ) and its positional encoding (g t ), we employ a multilayer perceptron to predict how likely the unit is to be included in the summary.", "This process is described by Eqs.", "(4-5) and further illustrated in Figure 1 .", "a t = f ReLU (W a [h e t ; g t ; s t ] + b a ) (4) p(y t |y <t , x) = σ(w y a t + b y ) (5) Our model parameters include {W a , b a , w y , b y } along with those required by f Bi-LSTM 1 , f CNN 2 and f Uni-LSTM 3 .", "It is possible to train this model in a fully supervised fashion by minimizing the negative log-likelihood of the training data.", "We generate ground-truth labels for source text units as follows.", "A source word receives a label of 1 if both itself and its adjacent word appear in the human abstract (excluding cases where both words are stopwords).", "This heuristic aims to label consecutive source words (2 or more) as summaryworthy, as opposed to picking single words which can be less informative.", "A source text chunk receives a label of 1 if one of its component words is labelled 1 in the above process.", "Because human abstracts are often short and contain novel words not present in source documents, they can be suboptimal for generating ground-truth labels for extraction units.", "Only a small portion of the source words (about 8% in our dataset) are labelled as positive, whereas the vast majority are negative.", "Such labels can be ineffective in providing supervision.", "In the following section, we investigate a new learning paradigm, which encourages extractive summaries to contain informative content useful for answering important questions, while question-answer pairs can be automatically derived from human abstracts.", "Using Summaries to Answer Questions Our hypothesis is that high-quality summaries should contain informative content making them appropriate to serve as document surrogates to satisfy users' information needs.", "We train the extractive summarizer to identify source text units necessary for answering questions, and the questionanswer (QA) pairs can be conveniently developed from human abstracts.", "To obtain QA pairs, we set an answer token to be either a salient word or a named entity to limit the space of potential answers.", "For any sentence in the human abstract, we identify an answer token from it, then replace the answer token with a blank to create a Cloze-style question-answer pair (see Table 1 ).", "When a sentence contains multiple answer tokens, a set of QA pairs can be obtained from it.", "It is important to note that at least one QA pair should be extracted from each sentence of the abstract.", "Because a system summary is trained to contain content useful for answering all questions (≈human abstract), any missing QA pair is likely to cause the summary to be insufficient.", "We collect answer tokens using the following methods: (a) we extract a set of entities with tag {PER, LOC, ORG, MISC} from each sentence using the Stanford CoreNLP toolkit ; (b) we also identify the ROOT word of each 
sentence's dependency parse tree along with the sentence's subject/object word, whose type is {NSUBJ, CSUBJ, OBJ, IOBJ} (if exists), then add them to the collection of answer tokens.", "Further, we prune the answer space by excluding those which appear fewer than 5 times overall.", "Having several methods for question construction allows us to explore the answer space properly.", "In the results section we perform experiments on root, subject/object, and named entities to see which model provides the best extraction guide.", "Given an extractive summary S containing a set of source text units, and a collection of questionanswer pairs P = {(Q k , e * k )} K k=1 related to the source document, we want to develop a mechanism leveraging the extractive summary to answer these questions.", "We first encode each question Q k to a vector representation (q k ).", "This is achieved by concatenating the last hidden states of the forward/backward passes of a bidirectional LSTM (Eq.", "(6) ).", "Next, we exploit the attention mechanism to locate summary parts that are relevant to answering the k-th question.", "Given the attention mechanism, an extractive summary S can be used to answer multiple questions related to the document.", "We define α t,k to be the semantic relatedness between the t-th source text unit and the k-th question.", "Following Chen et al.", "(2016a) , we introduce a bilinear term to characterize their relationship (α t,k ∝ h e t W α q k ; see Eq.", "(7) ).", "In this process, we consider only those source text units selected in summary S. Using α t,k as weights, we then compute a context vector c k condensing summary content related to the k-th question (Eq.", "(8)) .", "q k = f Bi-LSTM 4 (Q k ) (6) α t,k = exp(h e t W α q k ) t exp(h e t W α q k ) (7) c k = t α t,k h e t (8) u k = [c k ; q k ; |c k − q k |; c k ⊗ q k ] (9) To predict the most probable answer, we construct a fully-connected network as the output layer.", "The input to the network includes a concatenation of the context vector (c k ), question vector (q k ), absolute difference (|c k − q k |) and element-wise product (c k ⊗ q k ) of the two vectors (Eq.", "(9)).", "A softmax function is used to estimate a probability distribution over the space of candidate answers: .", "P (e k |S, Q k ) = softmax(W e f ReLU (W u u k + b u )).", "A Reinforcement Learning Framework In this section we introduce a reinforcement learning framework to explore the space of possible extractive summaries and present a novel reward function to promote summaries that are adequate, fluent, restricted in length, and competent in question answering.", "Our reward function consists of four components, whose interpolation weights γ, α, and β are tuned on the dev set.", "R(y) = R c (y) + γR a (y) + αR f (y) + βR l (y) We define QA competency (Eq.", "(10)) as the average log-likelihood of correctly answering questions using the system summary (y).", "A highquality system summary is expected to resemble reference summary by using similar wording.", "The adequacy metric (Eq.", "(11)) measures the percentage of overlapping unigrams between the system (y) and reference summary (y * ).", "The fluency criterion (Eq.", "(12)) encourages consecutive sequences of source words to be selected by preventing many 0/1 switches in the label sequence (i.e., |y t − y t−1 |).", "Finally, we limit the summary size by setting the ratio of selected words to be close to a threshold δ (Eq.", "(13) ).", "QA R c (y) = 1 K K k=1 log P (e * k |y, Q k ) (10) Adequ.", "R a (y) 
= 1 |y * | U(y, y * ) (11) Fluency R f (y) = − |y| t=2 |y t − y t−1 | (12) Length R l (y) = − 1 |y| t y t − δ (13) The reward function R(y) successfully combines intrinsic measures of summary fluency and adequacy (Goldstein et al., 2005) with extrinsic measure of summary responsiveness to given questions (Dang, 2006; Murray et al., 2008) .", "A reinforcement learning agent finds a policy P (y|x) to maximize the expected reward E P (y|x) [R(y)].", "Training the system with policy gradient (Eq.", "(14) ) involves repeatedly sampling an extractive summaryŷ from the source document x.", "At time t, the agent takes an action by sampling a decision based on p(y t |ŷ <t , x) (Eq.", "(5)) indicating whether the t-th source text unit is to be included in the summary.", "Once the full summary sequenceŷ is generated, it is compared to the ground-truth sequence to compute the reward R(ŷ).", "In this way, reinforcement learning explores the space of extractive summaries and promotes those yielding high rewards.", "At inference time, rather than sampling actions from p(y t |y <t , x), we choose y t that yields the highest probability to generate the system summary y.", "This process is deterministic and no QA is required.", "∇ θ E P (y|x) [R(y)] = E P (y|x) [R(y)∇ θ log P (y|x)] ≈ 1 N N n=1 R(ŷ (n) )∇ θ log P (ŷ (n) |x) (14) Experiments We proceed by discussing the dataset and settings, comparison systems, and experimental results obtained through both automatic metrics and human evaluation in a reading comprehension setting.", "Dataset and Settings Our goal is to build an extractive summarizer identifying important textual segments from source articles.", "To investigate the effectiveness of the proposed approach, we conduct experiments on the CNN/Daily Mail dataset using a version provided by See et al.", "(2017) .", "The reference summaries of this dataset were created by human editors exhibiting a moderate degree of extractiveness.", "E.g., 83% of summary unigrams and 45% of bigrams appear in source articles (Narayan et al., 2018a) .", "On average, a CNN article contains 761 words / 34 sentences and a DM article contains 653 words / 29 sentences.", "We report results respectively for the CNN and DM portion of the dataset.", "Our hyperparameter settings are as follows.", "We set the hidden state dimension of the LSTM to be 256 in either direction.", "A bidirectional LSTM f Bi-LSTM 1 (·) produces a 512-dimensional vector for each content word.", "Similarly, f Bi-LSTM 4 (·) generates a question vector q k of the same size.", "Our CNN encoder f CNN 2 (·) uses multiple window sizes of {1, 3, 5, 7} and 128 filters per window size.", "h e t is thus a 512-dimensional vector using either CNN or LSTM encoder.", "We set the hidden state dimension of s t to be 128.", "We also use 100-dimensional word embeddings (Pennington et al., 2014) and sinusoidal positional encodings (Vaswani et al., 2017) of 30 dimensions.", "The maximum article length is set to 400 words.", "Compared to the study of Arumae and Liu (2018) , we expand the search space dramatically from 100 to 400 words, which poses a challenge to the RLbased summarizers.", "We associate each article with at most 10 QA pairs (K=10) and use them to guide the extraction of summary segments.", "We apply mini-batch training with Adam optimizer (Kingma and Ba, 2014) , where a mini-batch contains 128 articles and their QA pairs.", "The summary ratio δ is set to 0.15, yielding extractive summaries of about 60 words.", "Following Arumae and Liu (2018) , we set 
hyperparameters β = 2α; α and γ are tuned on the dev set using grid search.", "Experimental Results Comparison systems We compare our method with a number of extractive and abstractive systems that have reported results on the CNN/DM datasets.", "We consider non-neural approaches that extract sentences from the source article to form a summary.", "These include LexRank (Radev et al., 2004) , SumBasic (Vanderwende et al., 2007) , and KLSum (Haghighi and Vanderwende, 2009) .", "Such methods treat sentences as bags of words, and then select sentences containing topically important words.", "We further include the Lead-3 baseline that extracts the first 3 sentences from any given article.", "The method has been shown to be a strong baseline for summarizing news articles.", "Neural extractive approaches focus on learning vector representations for sentences and words, then performing extraction based on the learned representations.", "Cheng et al.", "(2016) describe a neural network method composed of a hierarchical document encoder and an attention-based extractor.", "The system has two variants: NN-WE extracts words from the source article and NN-SE extracts sentences.", "SummaRuNNer (Nallapati et al., 2017) presents an autoregressive sequence labeling method based on recurrent neural networks.", "It selects summary sentences based on their content, salience, position, and novelty representations.", "Abstractive summarization methods are not directly comparable to our approach, but we choose to include three systems that report results respectively for CNN and DM datasets.", "Distraction-M3 (Chen et al., 2016b ) trains the summarization system to distract its attention to traverse different regions of the source article.", "Graph attention (Tan et al., 2017) introduces a graph-based attention mechanism to enhance the encoderdecoder framework.", "PointerGen+Cov.", "(See et al., 2017) allows the system to not only copy words from the source text but also generate summary words by selecting them from a vocabulary.", "Abstractive methods can thus introduce new words to the summary that are not present in the source article.", "However, system summaries may change the meaning of the original texts due to this flexibility.", "Results We present summarization results of various systems in Tables 2 and 3 , evaluated on the standard CNN/DM test sets by R-1, R-2, and R-L metrics (Lin, 2004) , which respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.", "We investigate four variants of our method: QASumm+NoQ does not utilize any question-answer pairs during training.", "It extracts summary text chunks by learning from groundtruth labels ( §3.2) and the chunks are encoded by f Bi-LSTM 1 .", "Other variants initialize their models using pretrained parameters from QASumm+NoQ, then integrate the reinforcement learning objective ( §3.4) to exploit the space of possible extractive summaries and reward those that are useful for answering questions.", "We consider three types of QA pairs: the answer token is the root of a sentence dependency parse tree (+ROOT), a subject or object (+SUBJ/OBJ), or an entity found in the sentence (+NER).", "In all cases, the question is generated by replacing the answer token with a blank symbol.", "As illustrated in Tables 2 and 3 , our QASumm methods with reinforcement learning (+ROOT, +SUBJ/OBJ, +NER) perform competitively with strong baselines.", "They outperform the counterpart QASumm+NoQ that makes no 
use of the QA pairs by a substantial margin.", "They outperform or perform at a comparable level to state-of-the-art published systems on the CNN/DM datasets but are generally inferior to PointerGen.", "We observe that exacting summary chunks is highly desirable in real-world applications as it provides a mechanism to generate concise summaries.", "Nonetheless, accurately identifying summary chunks is challenging because the search space is vast and spuriousness arises in chunking sentences.", "Cheng and Lapata (2016) report a substantial performance drop when adapting their system to extract words.", "Our QASumm methods focusing on chunk extraction perform on par with competitive systems that extract whole sentences.", "We additionally present human evaluation results of summary usefulness for a reading comprehension task in §4.3.", "In Tables 2 and 3 , we further show the number of unique answers per QA type.", "We find that the ROOT-type QA pairs have the least number of unique answers.", "They are often main verbs of sentences.", "In contrast, the SUBJ/OBJ-type has the most number of answers.", "They are subjects and objects of sentences and correspond to an open class of content words.", "The NER-type has a moderate number of answers compared to others.", "Note that all answer tokens have been filtered by frequency; those appearing less than 5 times in the dataset are removed to avoid overfitting.", "Among variants of the QASumm method, we find that QASumm+ROOT achieves the highest scores on DM dataset.", "QASumm+NER performs consistently well on both CNN and DM datasets, suggesting QA pairs of this type are effective in guiding the system to extract summary chunks.", "We conjecture that maintaining a moderate number of answers is important to maximize performance.", "To answer questions with missing entities, the summary is encouraged to contain similar content as the question body.", "Because questions are derived from the human abstract, this in turn requires the system summary to carry similar semantic content as the human abstract.", "Question-answering accuracy We next dive into the QA component of our system to investigate question-answering performance when different types of summaries and QA pairs are supplied to the system ( §3.3).", "Given a question, the system predicts an answer using an extractive summary as the source input.", "Intuitively, an informative summary can lead to high QA accuracy, as the summary content serves well as the basis for predicting answers.", "With the same summary as input, certain types of questions can be more difficult to answer than others, and the system must rely heavily on the summary to gauge correct answers.", "We compare various types of summaries.", "These include (a) QASumm+NoQ which extracts summary chunks without requiring QA pairs; and (b) GoldSumm, which are gold-standard extractive summaries generated by collecting source words appearing in human summaries.", "We further consider NoText and FullText, corresponding to using no source text or the full source article as input.", "They represent the two extremes.", "In all cases the QA component ( §3.3) is trained on the training set and we report QA accuracies on the dev set.", "In Table 4 , we observe that question-answering with GoldSumm performs the best for all QA types.", "It outperforms the scenarios using Full-Text as the source input.", "This indicates that distilled information contained in a high-quality summary can be useful for answering questions, as searching for 
answers in a succinct summary can be more efficient than that in a full article.", "Moreover, we observe that the performance of QA-Summ+NoQ is in between NoText and GoldSumm for all answer types.", "The results suggest that extractive summaries with even modest ROUGE scores can prove useful for question-answering.", "Regarding different types of QA pairs, we find that the ROOT-type can achieve high QA accuracy when using NoText input.", "It suggests that ROOT answers can to some extent be predicted based on the question context.", "The NER-type QA pairs work the best for both GoldSumm and Full-Text, likely because the source texts contain necessary entities required to correctly answer those questions.", "We also find the SUBJ/OBJ-type QA pairs have the smallest gap between train/dev accuracies, despite that they have a large answer space.", "Based on the analysis we would suggest future work to consider using NER-based QA pairs as they encourage the summaries to contain salient source content and be informative.", "Extraction units We finally compare the performance of using either words or chunks as extraction units ( §3.1).", "The chunks are obtained by breaking down sentence constituent parse trees in a top-down manner until all tree fragments contain 5 words or less.", "We observe that 70% of the chunks are 1-grams, and 2/3/4/5-grams are 9%, 7%, 6%, and 8% respectively.", "We compare the bidirectional LSTM (f LSTM 1 ) and CNN (f CNN 2 ) encoders for their effectiveness on generating representations for extraction units.", "Figure 2 presents the results of the QASumm+NoQ system under various settings.", "We find that extracting chunks performs superior, and combining chunks with LSTM representations yield the highest scores.", "Human Evaluation Testing the usefulness of an extractive system driven by reading comprehension is not inherently measured by automatic metrics (i.e.", "ROUGE).", "We conducted a human evaluation to assess whether the highlighted summaries contribute to document understanding.", "Similar to our training paradigm we presented each participant with the document and three fill-in-the-blank questions created from the human abstracts.", "It was guaranteed that each question was from a unique human abstract to avoid seeing the answer adjacent to the same template.", "The missing section was randomly generated to be either the root word, the subject or ob- See et al.", "(2017) .", "Our systems tested were the supervised extractor, and our full model (NER).", "ject of the sentence, or a named entity.", "We compare our reinforced extracted summary (presented as a bold overlay to the document), against our supervised method (section 3.2), abstractive summaries generated by See et al.", "(2017) , and the human abstracts in full.", "Additionally we asked the participants to rate the quality of the summary presented (1-5, with 5 being most informative).", "We utilized Amazon Mechanical Turk, and conducted an experiment where we sampled 80 documents from the CNN test set.", "The articles were evenly split across the four competing systems, and each HIT was completed by 5 turkers.", "Upon completion the data was analyzed manually for accuracy since turkers entered each answer as free text, and to remove any meaningless datapoints.", "Table 5 shows the average time (in seconds) to complete a single question, the overall accuracy of the participants, and the informativeness of a given summary type.", "Excluding the use of human abstracts, all systems resulted in similar 
performance times.", "However we observe a large margin in QA accuracy in our full system compared to the abstractive and our supervised approach.", "Although participants rated the informativeness of the summaries to be the same our systems yielded a higher performance.", "This strongly indicates that having a system which makes using of document comprehension has a tangible effect when applied towards a real-world task.", "Conclusion We exploited an extractive summarization framework using deep reinforcement learning to identify consecutive word sequences from a document to form an extractive summary.", "Our reward function promotes adequate and fluent summaries that can serve as document surrogates to answer important questions, directly addressing users' information needs.", "Experimental results on benchmark datasets demonstrated the efficacy of our proposed method over state-of-the-art baselines, assessed by both automatic metrics and human evaluators." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Our Approach", "Representing an Extraction Unit", "Constructing an Extractive Summary", "Using Summaries to Answer Questions", "A Reinforcement Learning Framework", "Experiments", "Dataset and Settings", "Experimental Results", "Human Evaluation", "Conclusion" ] }
GEM-SciDuet-train-36#paper-1050#slide-14
Conclusion
We exploited an extractive summarization framework using deep reinforcement learning to identify word sequences from a document to form a summary. Our reward function promotes fluent summaries that can serve as document surrogates to answer important questions. Experimental results on benchmark data demonstrated the efficacy of our proposed method, assessed by both automatic metrics and human evaluators. Kristjan Arumae and Fei Liu Guiding Extractive Summarization with Question-Answering Rewards - NAACL 2019
We exploited an extractive summarization framework using deep reinforcement learning to identify word sequences from a document to form a summary. Our reward function promotes fluent summaries that can serve as document surrogates to answer important questions. Experimental results on benchmark data demonstrated the efficacy of our proposed method, assessed by both automatic metrics and human evaluators. Kristjan Arumae and Fei Liu Guiding Extractive Summarization with Question-Answering Rewards - NAACL 2019
[]
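The Cloze-style question construction used to supervise the summarizer above (blanking out a root word, subject/object, or named entity in an abstract sentence) can be sketched as follows. The entity and dependency extraction (Stanford CoreNLP in the paper) is assumed to happen upstream; the sentence and candidate answers in the example are invented for illustration.

```python
# Sketch of Cloze-style QA pair creation from a human-abstract sentence,
# assuming candidate answer tokens (entities / root / subj-obj) were extracted upstream.
def make_qa_pairs(abstract_sentence, answer_tokens, blank="_____"):
    """Return (question, answer) pairs by blanking out each answer token in turn."""
    words = abstract_sentence.split()
    pairs = []
    for ans in answer_tokens:
        if ans in words:
            question = " ".join(blank if w == ans else w for w in words)
            pairs.append((question, ans))
    return pairs

# Hypothetical example:
qa = make_qa_pairs("Arsenal beat Chelsea at the Emirates on Sunday",
                   ["Arsenal", "Chelsea", "beat"])
# -> [('_____ beat Chelsea at the Emirates on Sunday', 'Arsenal'), ...]
```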
GEM-SciDuet-train-37#paper-1053#slide-0
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root to leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence, just as in other transition-based parsers, yielding an efficient decoding algorithm with $O(n^2)$ time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states $s_i$.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector ($s_i$) of the top element in the stack $\sigma$ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states $h_i$, one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail:", "[Figure: illustration of the grandparent and sibling structures; the embedded figure text is not recoverable from the extraction.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: $\beta_t = s_h + s_g + s_s$, where $\beta_t$ is the input vector of the decoder at time $t$ and $h$, $g$, $s$ are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector $\beta_t$, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): $e^t_i = h_t^{\top} W s_i + U^{\top} h_t + V^{\top} s_i + b$, where $W$, $U$, $V$, $b$ are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and overfitting of the model.", "We follow this work by using a one-layer perceptron to $s_i$ and $h_i$ with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector $h_t$ and child vector $s_i$ as inputs.", "Again, we use MLPs to transform $h_t$ and $s_i$ before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences: $P_\theta(y|x)$, which can be factorized as: $P_\theta(y|x) = \prod_{i=1}^{k} P_\theta(p_i \mid p_{<i}, x) = \prod_{i=1}^{k} \prod_{j=1}^{l_i} P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x)$ (2), where $\theta$ represents model parameters.", "$p_{<i}$ denotes the preceding paths that have already been generated.", "$c_{i,j}$ represents the $j$th word in $p_i$ and $c_{i,<j}$ denotes all the proceeding words on the path $p_i$.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define $P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x) = a_t$, where the attention vector $a_t$ (of dimension $n$) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separated multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length $n$ is $2n-1$, linear in $n$.
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
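As a companion to the decoding procedure summarized in the paper content above (an internal stack driving a top-down, depth-first search, with a pointer selecting one child per step and a self-point signalling that a head word is finished), here is a minimal greedy sketch. It is not the authors' implementation: the scorer `point_scores`, the greedy (beam size 1) choice, and all names are illustrative assumptions; in the paper the scores come from biaffine attention over encoder states and decoding uses a beam.

```python
# Minimal greedy sketch of stack-pointer decoding (illustrative, not the authors' code).
# `point_scores(head, candidates)` stands in for the biaffine attention scorer and is
# assumed to return one score per candidate position.

def stack_pointer_decode(n_words, point_scores):
    ROOT = 0                                  # index 0 plays the role of the virtual root "$"
    heads = [None] * (n_words + 1)            # heads[c] = head index chosen for word c
    stack = [ROOT]                            # internal stack, initialized with the root
    available = set(range(1, n_words + 1))    # words not yet attached to any head

    while stack:
        h = stack[-1]
        candidates = sorted(available) + [h]  # pointing at h itself means "no more children"
        scores = point_scores(h, candidates)
        best = max(range(len(candidates)), key=lambda i: scores[i])
        c = candidates[best]
        if c == h:
            stack.pop()                       # subtree of h is finished
        else:
            heads[c] = h                      # new arc h -> c
            available.remove(c)
            stack.append(c)                   # descend into the subtree rooted at c
    return heads
```

With a well-behaved scorer the number of pointer decisions stays linear in the sentence length, in line with the 2n−1 steps discussed in the paper; the inside-out child order and beam search are deliberately omitted from this sketch.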
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
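The UAS and LAS figures reported throughout the row above are token-level attachment accuracies. The sketch below shows how such scores are commonly computed; it is not the evaluation script used by the authors, and it ignores details such as the punctuation-exclusion convention the paper mentions for Chinese and English.

```python
def attachment_scores(gold_heads, gold_labels, pred_heads, pred_labels):
    """Unlabeled (UAS) and labeled (LAS) attachment scores, in percent.

    Each argument is a list of per-sentence lists, aligned token by token.
    """
    total = correct_head = correct_both = 0
    for gh, gl, ph, pl in zip(gold_heads, gold_labels, pred_heads, pred_labels):
        for g_h, g_l, p_h, p_l in zip(gh, gl, ph, pl):
            total += 1
            if g_h == p_h:                    # head attached correctly -> counts for UAS
                correct_head += 1
                if g_l == p_l:                # head and label both correct -> counts for LAS
                    correct_both += 1
    return 100.0 * correct_head / total, 100.0 * correct_both / total
```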
GEM-SciDuet-train-37#paper-1053#slide-0
Dependency Parsing
But there were no buyers
But there were no buyers
[]
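The slide above pairs the title "Dependency Parsing" with the example sentence "But there were no buyers", which the paper analyses in its Figure 1(a). A dependency tree for such a sentence is usually stored as one head index per word; the particular arcs below are a plausible analysis chosen only for illustration and may differ in detail from the tree drawn in the paper.

```python
# One plausible dependency analysis of "But there were no buyers",
# stored as one head index per word (0 = virtual root "$"). Illustrative only.
words = ["$", "But", "there", "were", "no", "buyers"]
heads = [None, 3,     3,       0,      5,    3]

assert sum(1 for h in heads[1:] if h == 0) == 1   # exactly one word attaches to the root
for child, head in enumerate(heads[1:], start=1):
    print(f"{words[head]:>6} -> {words[child]}")
```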
GEM-SciDuet-train-37#paper-1053#slide-1
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root to leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer network selects one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence, just as for other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s_i.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s_i) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h_i, one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail:", "[Figure: diagrams of the sibling and grandparent structures.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β_t = s_h + s_g + s_s, where β_t is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β_t, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, where W, U, V, b are parameters, denoting the weight matrix of the bilinear term, the two weight vectors of the linear terms, and the bias.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can reduce both the dimensionality and the overfitting of the model.", "We follow this work by applying a one-layer perceptron to s_i and h_i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h_t and child vector s_i as inputs.", "Again, we use MLPs to transform h_t and s_i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, P_θ(y|x), which can be factorized as: P_θ(y|x) = ∏_{i=1}^{k} P_θ(p_i | p_{<i}, x) = ∏_{i=1}^{k} ∏_{j=1}^{l_i} P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x), (2) where θ represents the model parameters.", "p_{<i} denotes the preceding paths that have already been generated.", "c_{i,j} represents the jth word in p_i and c_{i,<j} denotes all the preceding words on the path p_i.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x) = a^t, where the attention vector a^t (of dimension n) is used as the distribution over the indices of words in the sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n.
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
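The attention machinery described in the paper content above (a score e^t_i for every input position, turned into a pointer distribution a^t by a softmax, with the biaffine form e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b) can be sketched in a few lines of numpy. The shapes, the variable names, and the omission of the one-layer elu perceptrons that the paper applies to h_t and s_i before scoring are all simplifying assumptions.

```python
import numpy as np

def biaffine_attention(h_t, S, W, U, V, b):
    """Pointer distribution a^t from scores e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b.

    h_t : (d,)   decoder hidden state for the current head word
    S   : (n, d) encoder hidden states s_1 ... s_n
    W   : (d, d) bilinear weight matrix; U, V : (d,) linear weights; b : scalar bias
    """
    e = S @ (W.T @ h_t) + (U @ h_t) + S @ V + b   # one score per input position, shape (n,)
    e = e - e.max()                               # subtract the max for numerical stability
    return np.exp(e) / np.exp(e).sum()            # softmax over positions

# tiny smoke test with random parameters
d, n = 4, 6
rng = np.random.default_rng(0)
a_t = biaffine_attention(rng.normal(size=d), rng.normal(size=(n, d)),
                         rng.normal(size=(d, d)), rng.normal(size=d),
                         rng.normal(size=d), 0.0)
assert np.isclose(a_t.sum(), 1.0)
```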
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
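The factorized training objective in the content above (Eq. (2), implemented as a cross-entropy loss over the pointer distributions) reduces to summing the negative log of the attention weight placed on the gold target at each decoding step. The sketch below assumes the gold tree has already been linearized into one pointer target per step under the predefined child order; the jointly trained label classifier is left out, and the function name is an assumption.

```python
import numpy as np

def arc_nll(step_attention, gold_targets):
    """Negative log-likelihood of one gold tree under the pointer model.

    step_attention : list of 1-D arrays, the attention vector a^t at each decoding step
    gold_targets   : list of ints, the gold pointer target at each step
                     (the head's own position when it has no further children)
    """
    loss = 0.0
    for a_t, c in zip(step_attention, gold_targets):
        loss -= np.log(a_t[c] + 1e-12)            # cross-entropy contribution of this step
    return loss
```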
GEM-SciDuet-train-37#paper-1053#slide-1
Transition based Parsing
Process the input sequentially in order Use actions that build up a tree Choose which actions to apply with a classifier
Process the input sequentially in order Use actions that build up a tree Choose which actions to apply with a classifier
[]
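The slide above summarizes classical transition-based parsing: read the words in order, keep a stack and a buffer, and let a classifier choose actions that incrementally build the tree. For background only (this is the family of parsers STACKPTR is contrasted with, not the proposed model), a minimal arc-standard sketch with the classifier stubbed out as `choose_action` looks as follows; names and the action inventory are illustrative assumptions.

```python
# Minimal arc-standard transition parser (background for the slide; not STACKPTR).
# `choose_action(stack, buffer)` stands in for a trained classifier.

def arc_standard_parse(n_words, choose_action):
    heads = [None] * (n_words + 1)
    stack, buffer = [0], list(range(1, n_words + 1))   # index 0 is the root
    while buffer or len(stack) > 1:
        action = choose_action(stack, buffer)
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))
        elif action == "LEFT-ARC" and len(stack) > 2:
            dep = stack.pop(-2)            # second-topmost becomes a dependent of the topmost
            heads[dep] = stack[-1]
        elif action == "RIGHT-ARC" and len(stack) > 1:
            dep = stack.pop()              # topmost becomes a dependent of the new topmost
            heads[dep] = stack[-1]
        else:
            raise ValueError(f"action {action} is not applicable in this configuration")
    return heads
```

Because each action either consumes a buffer word or removes a stack word, the number of steps is linear in the sentence length, which is the property the stack-pointer parser retains while dropping the strict left-to-right processing order.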
GEM-SciDuet-train-37#paper-1053#slide-2
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root to leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer network selects one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence, just as for other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s_i.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s_i) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h_i, one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail:", "[Figure: diagrams of the sibling and grandparent structures.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β_t = s_h + s_g + s_s, where β_t is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β_t, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, where W, U, V, b are parameters, denoting the weight matrix of the bilinear term, the two weight vectors of the linear terms, and the bias.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can reduce both the dimensionality and the overfitting of the model.", "We follow this work by applying a one-layer perceptron to s_i and h_i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h_t and child vector s_i as inputs.", "Again, we use MLPs to transform h_t and s_i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, P_θ(y|x), which can be factorized as: P_θ(y|x) = ∏_{i=1}^{k} P_θ(p_i | p_{<i}, x) = ∏_{i=1}^{k} ∏_{j=1}^{l_i} P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x), (2) where θ represents the model parameters.", "p_{<i} denotes the preceding paths that have already been generated.", "c_{i,j} represents the jth word in p_i and c_{i,<j} denotes all the preceding words on the path p_i.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x) = a^t, where the attention vector a^t (of dimension n) is used as the distribution over the indices of words in the sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n.
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
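As a companion to the error analysis described above, the following is a small, hypothetical Python sketch of the McDonald and Nivre (2011)-style breakdown, binning labeled attachment accuracy by sentence length. The input format and field names are placeholders, not anything defined in the paper.

```python
from collections import defaultdict
from typing import Dict, Iterable

def accuracy_by_sentence_length(sentences: Iterable[dict], bin_size: int = 10) -> Dict[str, float]:
    """Labeled attachment accuracy, bucketed by sentence length (1-10, 11-20, ...)."""
    hits: Dict[int, int] = defaultdict(int)
    total: Dict[int, int] = defaultdict(int)
    for s in sentences:
        bucket = (len(s["gold_heads"]) - 1) // bin_size
        for gh, ph, gl, pl in zip(s["gold_heads"], s["pred_heads"],
                                  s["gold_labels"], s["pred_labels"]):
            total[bucket] += 1
            hits[bucket] += int(gh == ph and gl == pl)
    return {f"{b * bin_size + 1}-{(b + 1) * bin_size}": hits[b] / total[b]
            for b in sorted(total)}
```

The same loop can be re-keyed on dependency length or distance to the root to reproduce the other two breakdowns discussed above.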
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-2
Example Arc standard Parsing
Actions: Shift, reduce-right, reduce-left ROOT I saw a girl ROOT I saw a girl Support vector machines [Nivre+ 2004] Feed-forward neural networks [Chen+ 2014] Recurrent neural networks [Dyer+ 2015]
Actions: Shift, reduce-right, reduce-left ROOT I saw a girl ROOT I saw a girl Support vector machines [Nivre+ 2004] Feed-forward neural networks [Chen+ 2014] Recurrent neural networks [Dyer+ 2015]
[]
GEM-SciDuet-train-37#paper-1053#slide-3
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s i .", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s i ) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h i , one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail: [Figure: illustration of the grandparent and sibling structures; the remainder of this span was unrecoverable PDF-extraction residue and has been omitted.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β_t = s_h + s_g + s_s, where β_t is the input vector of decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β_t , thus introducing no additional model parameters.", "Biaffine Attention Mechanism For attention score function (Eq.", "(1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017) : e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017) , applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and overfitting of the model.", "We follow this work by using a one-layer perceptron to s i and h i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h t and child vector s i as inputs.", "Again, we use MLPs to transform h t and s i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences: P_θ(y|x), which can be factorized as: P_θ(y|x) = ∏_{i=1}^{k} P_θ(p_i | p_{<i}, x) = ∏_{i=1}^{k} ∏_{j=1}^{l_i} P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x),   (2) where θ represents model parameters.", "p <i denotes the preceding paths that have already been generated.", "c i,j represents the jth word in p i and c i,<j denotes all the proceeding words on the path p i .", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x) = a_t , where attention vector a t (of dimension n) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separated multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017) , the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n.
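To ground the two formulas above, here is a short, hedged PyTorch sketch of (i) the element-wise sum β_t = s_h + s_g + s_s used as the decoder input and (ii) a single cross-entropy term of the factorized objective in Eq. (2). The tensors and indices are placeholders; this is not the authors' code.

```python
import torch
import torch.nn.functional as F

def decoder_input(S: torch.Tensor, head: int, grandparent: int, sibling: int) -> torch.Tensor:
    """β_t = s_h + s_g + s_s: element-wise sum, so the input dimension stays unchanged."""
    return S[head] + S[grandparent] + S[sibling]

def arc_step_loss(scores: torch.Tensor, gold_child: int) -> torch.Tensor:
    """One factor of Eq. (2): cross-entropy of the attention distribution a_t
    (softmax over the n biaffine scores) against the index of the gold child."""
    return F.cross_entropy(scores.unsqueeze(0), torch.tensor([gold_child]))
```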
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
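As a rough illustration of the optimization settings described above (Adam with β1 = β2 = 0.9, an initial learning rate of 0.001, a decay factor of 0.75 when validation performance plateaus, and gradient clipping at 5.0), here is a hedged PyTorch sketch. The `parser` and `batch` objects, and the assumption that the model returns the summed arc and label loss, are placeholders rather than the authors' implementation.

```python
import torch

def build_optimizer(parser: torch.nn.Module):
    optimizer = torch.optim.Adam(parser.parameters(), lr=1e-3, betas=(0.9, 0.9))
    # Multiply the learning rate by 0.75 whenever validation accuracy stops improving;
    # call scheduler.step(dev_uas) once per epoch with the validation score.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="max", factor=0.75)
    return optimizer, scheduler

def train_step(parser: torch.nn.Module, batch, optimizer) -> float:
    optimizer.zero_grad()
    loss = parser(batch)                        # assumed to return arc loss + label loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(parser.parameters(), 5.0)   # curb exploding gradients
    optimizer.step()
    return float(loss.item())
```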
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
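To summarize the decoding procedure described in the Overview, Biaffine Attention and Discussion sections above, here is a minimal NumPy sketch of greedy stack-pointer decoding with a biaffine scorer. It is a toy illustration, not the authors' parser: the parameters are untrained random matrices, the decoder LSTM is replaced by a single linear map purely to keep the example short, and beam search, dependency labels and the inside-out child ordering are omitted.

```python
import numpy as np

def biaffine(h: np.ndarray, S: np.ndarray, W, U, V, b) -> np.ndarray:
    """e_i = h^T W s_i + U^T h + V^T s_i + b for every encoder state s_i (rows of S)."""
    return S @ (W.T @ h) + float(U @ h) + S @ V + b

def stackptr_greedy_decode(S: np.ndarray, params) -> list:
    """Return heads[i] for each token (0 = the virtual root $, stored at row 0 of S)."""
    n = S.shape[0]
    W, U, V, b, A = params                  # A stands in for the decoder LSTM
    heads = [-1] * n
    stack = [0]                              # the stack is initialised with $
    available = set(range(1, n))             # words not yet attached to a head
    while stack and available:
        top = stack[-1]
        h = np.tanh(A @ S[top])              # simplified "decoder state" for the stack top
        scores = biaffine(h, S, W, U, V, b)
        allowed = available | {top}          # point to an available word, or to itself
        best = max(allowed, key=lambda i: scores[i])   # argmax of scores == argmax of a_t
        if best == top:                      # pointing to itself: children done, pop
            stack.pop()
        else:                                # new arc (top -> best); push the child
            heads[best] = top
            available.remove(best)
            stack.append(best)
    for i in available:                      # leftovers (possible with random weights)
        heads[i] = 0
    return heads

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 8, 6                              # e.g. "$ But there were no buyers"
    S = rng.normal(size=(n, d))              # stand-in encoder hidden states
    params = (rng.normal(size=(d, d)), rng.normal(size=d),
              rng.normal(size=d), 0.0, rng.normal(size=(d, d)))
    print(stackptr_greedy_decode(S, params))
```

Each word is attached exactly once and each head is popped after its last child, so the loop performs at most 2n−1 pointer steps, matching the linear step count discussed above.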
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-3
Our Proposal Stack pointer Networks StackPtr
But there were no buyers were there were were but were Actions: "Point" to the next word to choose as a child Model: A neural network, based on "pointer networks" Top-down parsing maintains a global view of the sentence Can maintain full history, low asymptotic running time (c.f. graph-based)
But there were no buyers were there were were but were Actions: "Point" to the next word to choose as a child Model: A neural network, based on "pointer networks" Top-down parsing maintains a global view of the sentence Can maintain full history, low asymptotic running time (c.f. graph-based)
[]
GEM-SciDuet-train-37#paper-1053#slide-4
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s_i.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s_i) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h_i, one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail.", "[Figure: illustrations of the grandparent and sibling second-order structures.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β_t = s_h + s_g + s_s, where β_t is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β_t, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq.", "(1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can reduce both the dimensionality and the overfitting of the model.", "We follow this work by applying a one-layer perceptron to s_i and h_i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h_t and child vector s_i as inputs.", "Again, we use MLPs to transform h_t and s_i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, P_θ(y|x), which can be factorized as: P_θ(y|x) = ∏_{i=1}^{k} P_θ(p_i | p_{<i}, x) = ∏_{i=1}^{k} ∏_{j=1}^{l_i} P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x), (2) where θ represents model parameters.", "p_{<i} denotes the preceding paths that have already been generated.", "c_{i,j} represents the jth word in p_i and c_{i,<j} denotes all the preceding words on the path p_i.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x) = a_t, where the attention vector a_t (of dimension n) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n.
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
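The attachment metrics listed above can be computed as in the following generic sketch. It is not the evaluation script used in the paper; heads and labels are assumed to come as per-sentence parallel lists, and root accuracy (RA) would simply be UAS restricted to tokens whose gold head is the root.

```python
def attachment_scores(gold_heads, gold_labels, pred_heads, pred_labels, keep=None):
    """UAS/LAS over tokens and UCM/LCM over sentences.
    `keep` is an optional per-sentence boolean mask used to exclude punctuation tokens."""
    tokens = uas = las = ucm = lcm = 0
    for k in range(len(gold_heads)):
        gh, gl, ph, pl = gold_heads[k], gold_labels[k], pred_heads[k], pred_labels[k]
        mask = keep[k] if keep is not None else [True] * len(gh)
        head_ok = [p == g for p, g, m in zip(ph, gh, mask) if m]
        both_ok = [p == g and q == r for p, g, q, r, m in zip(ph, gh, pl, gl, mask) if m]
        tokens += len(head_ok)
        uas += sum(head_ok)
        las += sum(both_ok)
        ucm += all(head_ok)
        lcm += all(both_ok)
    n_sent = len(gold_heads)
    return {"UAS": uas / tokens, "LAS": las / tokens,
            "UCM": ucm / n_sent, "LCM": lcm / n_sent}

# toy usage: two sentences, head index 0 = root
print(attachment_scores([[2, 0, 2], [0, 1]], [["det", "root", "obj"], ["root", "obj"]],
                        [[2, 0, 2], [0, 2]], [["det", "root", "obj"], ["root", "nmod"]]))
```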
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps, w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-4
Background Pointer Network Vinyals 2015
Output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Use attention as a pointer to select a member of the input sequence as the output. s and h are the hidden states of encoder and decoder, and score() is the attention scoring function, e.g. bi-affine attention [Luong+ 2015; Dozat+ 2017]
Output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Use attention as a pointer to select a member of the input sequence as the output. s and h are the hidden states of encoder and decoder, and score() is the attention scoring function, e.g. bi-affine attention [Luong+ 2015; Dozat+ 2017]
[]
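A toy sketch of the "attention as a pointer" idea summarized in the slide above: score the decoder state against each encoder state, take a softmax over input positions, and read off the argmax as the pointer. The dot-product scorer and all names here are illustrative stand-ins; the paper itself uses the biaffine variant of score().

```python
import numpy as np

def point(h_t, S):
    """Attention as a pointer: score the decoder state h_t against every encoder
    state s_i, softmax over positions, and return the argmax as the selected index."""
    e = S @ h_t                   # e_i = score(h_t, s_i); a plain dot product here
    a = np.exp(e - e.max())
    a /= a.sum()                  # a = softmax(e)
    return int(np.argmax(a)), a

# toy usage: 5 input positions, hidden size 4
rng = np.random.default_rng(1)
idx, dist = point(rng.normal(size=4), rng.normal(size=(5, 4)))
```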
GEM-SciDuet-train-37#paper-1053#slide-5
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s_i.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s_i) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h_i, one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail.", "[Figure: illustrations of the grandparent and sibling second-order structures.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β_t = s_h + s_g + s_s, where β_t is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β_t, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq.", "(1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can reduce both the dimensionality and the overfitting of the model.", "We follow this work by applying a one-layer perceptron to s_i and h_i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h_t and child vector s_i as inputs.", "Again, we use MLPs to transform h_t and s_i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, P_θ(y|x), which can be factorized as: P_θ(y|x) = ∏_{i=1}^{k} P_θ(p_i | p_{<i}, x) = ∏_{i=1}^{k} ∏_{j=1}^{l_i} P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x), (2) where θ represents model parameters.", "p_{<i} denotes the preceding paths that have already been generated.", "c_{i,j} represents the jth word in p_i and c_{i,<j} denotes all the preceding words on the path p_i.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x) = a_t, where the attention vector a_t (of dimension n) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n.
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps, w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-5
Variable Definitions
each of which is a sequence of words from root to a leaf w1 w2 w3 w4
each of which is a sequence of words from root to a leaf w1 w2 w3 w4
[]
GEM-SciDuet-train-37#paper-1053#slide-6
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s_i.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s_i) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h_i, one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail:", "[Figure: illustration of the grandparent and sibling structures; the embedded figure text is not recoverable.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β_t = s_h + s_g + s_s, where β_t is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β_t, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and overfitting of the model.", "We follow this work by applying a one-layer perceptron to s_i and h_i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h_t and child vector s_i as inputs.", "Again, we use MLPs to transform h_t and s_i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, P_θ(y|x), which can be factorized as: P_θ(y|x) = ∏_{i=1}^{k} P_θ(p_i | p_{<i}, x) = ∏_{i=1}^{k} ∏_{j=1}^{l_i} P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x), (2) where θ represents model parameters.", "p_{<i} denotes the preceding paths that have already been generated.", "c_{i,j} represents the jth word in p_i and c_{i,<j} denotes all the preceding words on the path p_i.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x) = a^t, where the attention vector a^t (of dimension n) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n. 
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
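
To make the attention pointer defined above (Eq. (1), scored here with the biaffine form from the Biaffine Attention Mechanism passage) concrete, the following is a minimal NumPy sketch of turning one decoder state into a distribution over encoder positions. The shapes, the random toy parameters, and the greedy argmax selection are illustrative assumptions, not the authors' released implementation.

import numpy as np

def biaffine_pointer(h_t, S, W, U, V, b):
    # e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, computed for every encoder position i
    e = S @ (W.T @ h_t) + U @ h_t + S @ V + b
    e = e - e.max()                        # numerical stability for the softmax
    a = np.exp(e) / np.exp(e).sum()        # a^t = softmax(e^t), Eq. (1)
    return a

# Toy usage: n = 5 encoder states of dimension d = 8.
rng = np.random.default_rng(0)
n, d = 5, 8
S = rng.normal(size=(n, d))                # encoder hidden states s_1 .. s_n
h_t = rng.normal(size=d)                   # decoder hidden state at step t
W, U, V, b = rng.normal(size=(d, d)), rng.normal(size=d), rng.normal(size=d), 0.0
a_t = biaffine_pointer(h_t, S, W, U, V, b)
child = int(a_t.argmax())                  # the position the parser "points" to

In the paper, s_i and the decoder state additionally pass through one-layer perceptrons with elu activations before scoring; that step is omitted here for brevity.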
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-6
Transition System
List: the words whose head has not been selected yet. Stack (σ): the partially processed head words whose children have not been fully selected. The stack is initialized with the root symbol $. At each decoding step t: receive the top element of the stack as the head word wh and generate the hidden state ht; compute the attention vector at using ht and the encoder hidden states si; generate an arc: choose a specific word (wc) from the list as the child of wh, remove wc from the list and push it onto the stack; complete a head: pop wh out of the stack.
List: the words whose head has not been selected yet. Stack (σ): the partially processed head words whose children have not been fully selected. The stack is initialized with the root symbol $. At each decoding step t: receive the top element of the stack as the head word wh and generate the hidden state ht; compute the attention vector at using ht and the encoder hidden states si; generate an arc: choose a specific word (wc) from the list as the child of wh, remove wc from the list and push it onto the stack; complete a head: pop wh out of the stack.
[]
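
The transition system summarised in the slide above can be phrased as a short decoding loop over a stack and a list of still-available words. The sketch below is a greedy, unlabeled version with a stub scoring policy; in the actual parser the choice comes from the decoder LSTM with biaffine attention and possibly beam search, so everything here is an illustrative assumption rather than the released system.

def greedy_stack_pointer_decode(n, choose):
    # n: number of words, indexed 1..n; index 0 is the virtual root "$".
    # choose(head, available): returns an index from `available`, or `head` itself
    #   to signal that all children of `head` have been generated.
    stack = [0]                          # internal stack, initialized with the root
    available = list(range(1, n + 1))    # words whose head has not been selected yet
    arcs = []                            # collected (head, child) pairs
    while stack:
        head = stack[-1]
        choice = choose(head, available)
        if choice == head or not available:
            stack.pop()                  # complete the head: pop it off the stack
        else:
            arcs.append((head, choice))  # generate an arc head -> child
            available.remove(choice)     # each word receives exactly one head
            stack.append(choice)         # descend depth-first into the new subtree
    return arcs

# Stub policy: attach the nearest still-available word, otherwise point back at the head.
def nearest_first(head, available):
    return min(available, key=lambda w: abs(w - head)) if available else head

arcs = greedy_stack_pointer_decode(4, nearest_first)   # [(0, 1), (1, 2), (2, 3), (3, 4)]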
GEM-SciDuet-train-37#paper-1053#slide-7
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s_i.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s_i) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h_i, one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail:", "[Figure: illustration of the grandparent and sibling structures; the embedded figure text is not recoverable.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β_t = s_h + s_g + s_s, where β_t is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β_t, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and overfitting of the model.", "We follow this work by applying a one-layer perceptron to s_i and h_i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h_t and child vector s_i as inputs.", "Again, we use MLPs to transform h_t and s_i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, P_θ(y|x), which can be factorized as: P_θ(y|x) = ∏_{i=1}^{k} P_θ(p_i | p_{<i}, x) = ∏_{i=1}^{k} ∏_{j=1}^{l_i} P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x), (2) where θ represents model parameters.", "p_{<i} denotes the preceding paths that have already been generated.", "c_{i,j} represents the jth word in p_i and c_{i,<j} denotes all the preceding words on the path p_i.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x) = a^t, where the attention vector a^t (of dimension n) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n. 
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-7
Features for the Classifier
Utilize higher-order information at each step of the top-down decoding procedure. Sibling and Grandchild structures proven beneficial for parsing performance (McDonald and Pereira, 2006). Use element-wise sum of the encoder hidden states instead of concatenation - does not increase the dimension of the decoder input: β_t = s_h + s_g + s_s
Utilize higher-order information at each step of the top-down decoding procedure. Sibling and Grandchild structures proven beneficial for parsing performance (McDonald and Pereira, 2006). Use element-wise sum of the encoder hidden states instead of concatenation - does not increase the dimension of the decoder input: β_t = s_h + s_g + s_s
[]
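The slide above summarizes the higher-order features fed to the pointer at each decoding step. As a quick illustration of the element-wise-sum construction, and only as a sketch rather than the authors' released code, the snippet below builds the decoder input β_t = s_h + s_g + s_s from stand-in encoder states; the hidden size, sentence length, and random values are assumptions made purely for this example.

```python
# Minimal sketch (not the authors' code) of the higher-order decoder input:
# the element-wise sum of the encoder hidden states of the head word, its
# grandparent, and its sibling.  hidden_dim, n_words, and the random
# "encoder states" are illustrative assumptions.
import numpy as np

hidden_dim = 8                                   # assumed encoder hidden size
n_words = 6                                      # e.g. "$ But there were no buyers"
rng = np.random.default_rng(0)
encoder_states = rng.standard_normal((n_words, hidden_dim))  # stand-in for BLSTM outputs s_i

def decoder_input(head_idx, grandparent_idx, sibling_idx):
    """beta_t = s_h + s_g + s_s: summing keeps the dimension at hidden_dim,
    so no extra decoder parameters are needed (unlike concatenation)."""
    return (encoder_states[head_idx]
            + encoder_states[grandparent_idx]
            + encoder_states[sibling_idx])

beta_t = decoder_input(head_idx=3, grandparent_idx=0, sibling_idx=2)
print(beta_t.shape)                              # (8,) -- same size as one encoder state
```

Because the three states are summed rather than concatenated, the decoder input keeps the encoder's hidden dimension, which is the parameter-saving point made on the slide.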
GEM-SciDuet-train-37#paper-1053#slide-8
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s i .", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s i ) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h i , one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail.", "[Figure: illustration of the sibling and grandparent structures; garbled text extracted from the embedded figure has been omitted.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: $\beta_t = s_h + s_g + s_s$, where $\beta_t$ is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector $\beta_t$, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): $e^t_i = h_t^{\top} W s_i + U^{\top} h_t + V^{\top} s_i + b$, where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and overfitting of the model.", "We follow this work by applying a one-layer perceptron to s i and h i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h t and child vector s i as inputs.", "Again, we use MLPs to transform h t and s i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, $P_\theta(y \mid x)$, which can be factorized as $P_\theta(y \mid x) = \prod_{i=1}^{k} P_\theta(p_i \mid p_{<i}, x) = \prod_{i=1}^{k} \prod_{j=1}^{l_i} P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x)$ (2), where $\theta$ represents model parameters.", "$p_{<i}$ denotes the preceding paths that have already been generated.", "$c_{i,j}$ represents the jth word in $p_i$ and $c_{i,<j}$ denotes all the preceding words on the path $p_i$.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define $P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x) = a^t$, where the attention vector $a^t$ (of dimension n) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n. 
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-8
Example
But there were no buyers were there were were but were
But there were no buyers were there were were but were
[]
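The slide above appears to trace the running example "But there were no buyers" through the decoder. The toy sketch below simulates the stack-based top-down decoding loop on that sentence: the word on top of the stack either points to one of its children (push) or points to itself (pop). The gold arcs follow Figure 1(a) of the paper, but the specific child ordering in `children` and the step bookkeeping are assumptions for illustration, not necessarily the inside-out order the parser is trained with.

```python
# Toy walkthrough (illustrative only, not the authors' code) of the stack-based
# top-down decoding for "But there were no buyers".
words = ["$", "But", "there", "were", "no", "buyers"]
children = {0: [3], 3: [2, 5, 1], 5: [4]}   # head index -> ordered child indices (assumed order)

stack, arcs, steps = [0], [], 0             # stack starts with the virtual root $
remaining = {h: list(cs) for h, cs in children.items()}
while stack:
    top = stack[-1]
    if remaining.get(top):
        child = remaining[top].pop(0)       # "point" from the stack top to its next child
        arcs.append((words[top], words[child]))
        stack.append(child)
    else:
        stack.pop()                         # point to itself: all children emitted, pop
    steps += 1

print(arcs)   # [('$', 'were'), ('were', 'there'), ('were', 'buyers'), ('buyers', 'no'), ('were', 'But')]
print(steps)  # 11: one pointing step per arc (5) plus one pop step per node (6, incl. $)
```

With the virtual root included, the loop performs one pointing step per arc plus one self-pointing (pop) step per node, which is where the linear number of decoding steps comes from.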
GEM-SciDuet-train-37#paper-1053#slide-9
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s i .", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s i ) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h i , one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail.", "[Figure: illustration of the sibling and grandparent structures; garbled text extracted from the embedded figure has been omitted.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: $\beta_t = s_h + s_g + s_s$, where $\beta_t$ is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector $\beta_t$, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): $e^t_i = h_t^{\top} W s_i + U^{\top} h_t + V^{\top} s_i + b$, where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and overfitting of the model.", "We follow this work by applying a one-layer perceptron to s i and h i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h t and child vector s i as inputs.", "Again, we use MLPs to transform h t and s i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, $P_\theta(y \mid x)$, which can be factorized as $P_\theta(y \mid x) = \prod_{i=1}^{k} P_\theta(p_i \mid p_{<i}, x) = \prod_{i=1}^{k} \prod_{j=1}^{l_i} P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x)$ (2), where $\theta$ represents model parameters.", "$p_{<i}$ denotes the preceding paths that have already been generated.", "$c_{i,j}$ represents the jth word in $p_i$ and $c_{i,<j}$ denotes all the preceding words on the path $p_i$.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define $P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x) = a^t$, where the attention vector $a^t$ (of dimension n) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n. 
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
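The optimization recipe described just above (Adam with beta1 = beta2 = 0.9, an initial learning rate of 0.001 annealed by a factor of 0.75 when validation performance stops improving, and gradient-norm clipping at 5.0) can be sketched as follows. The `model` and `batch` objects and the use of ReduceLROnPlateau as the annealing mechanism are stand-in assumptions rather than the paper's exact implementation; the 0.33 dropout rates are applied inside the model and are not shown here.

```python
# Sketch of the training setup: Adam (beta1 = beta2 = 0.9, lr 0.001),
# lr x 0.75 on validation plateau, gradient clipping at 5.0.
import torch

def build_optimizer(model: torch.nn.Module):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.9))
    # ReduceLROnPlateau stands in for the fixed-decay annealing in the paper
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode='max', factor=0.75, patience=1)
    return optimizer, scheduler

def train_step(model, batch, optimizer):
    optimizer.zero_grad()
    loss = model(batch)                       # assumed: arc loss + label loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    optimizer.step()
    return loss.item()
```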
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps, w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
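The training objective described earlier in this record reduces, per sentence, to a cross-entropy between each decoding step's pointer distribution and the oracle child index, summed with the label-classifier loss. A minimal sketch, where `attn_scores` and `label_scores` are assumed model outputs rather than the paper's actual interface:

```python
# Sketch of the factorized training loss: pointer cross-entropy along the
# oracle decoding steps plus the dependency-label cross-entropy.
import torch
import torch.nn.functional as F

def parser_loss(attn_scores, oracle_children, label_scores, gold_labels):
    # attn_scores: [steps, n_words] pre-softmax pointer scores per decoding step
    # oracle_children: [steps] index of the gold child (or the head itself)
    arc_loss = F.cross_entropy(attn_scores, oracle_children, reduction='sum')
    # label_scores: [n_words, n_labels]; gold_labels: [n_words]
    label_loss = F.cross_entropy(label_scores, gold_labels, reduction='sum')
    return arc_loss + label_loss              # sum of the two objectives
```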
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-9
Learning StackPtr
Factorize into sequence of top-down paths Pre-defined inside-out order for children of each head word Enables parser to utilize higher-order sibling information Train separate classifier for dependency label prediction Use head word and child information [Dozat+ 2017]
Factorize into sequence of top-down paths Pre-defined inside-out order for children of each head word Enables parser to utilize higher-order sibling information Train separate classifier for dependency label prediction Use head word and child information [Dozat+ 2017]
[]
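The slide above refers to a pre-defined inside-out order for the children of each head word, which fixes a unique oracle for the top-down decoder. Below is a sketch of building that oracle as a sequence of (head, pointed-to word) decisions via a depth-first traversal; the exact tie-breaking between left and right children at equal distance is an assumption, since the text only states that an inside-out order is used.

```python
# Sketch of converting a gold dependency tree into the top-down oracle
# sequence, visiting each head's children inside-out (nearest first).
def oracle_paths(heads):
    """heads[i-1] is the head of word i (1-based); 0 denotes the root $."""
    children = {i: [] for i in range(len(heads) + 1)}
    for dep, head in enumerate(heads, start=1):
        children[head].append(dep)
    for head in children:
        # inside-out: closer children before farther ones (tie-break assumed)
        children[head].sort(key=lambda d, h=head: (abs(d - h), d))
    order = []                                # sequence of decoder decisions
    def visit(head):
        for child in children[head]:
            order.append((head, child))       # decoder points from head to child
            visit(child)
        order.append((head, head))            # self-point: head is finished
    visit(0)
    return order
```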
GEM-SciDuet-train-37#paper-1053#slide-10
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence, just as for other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states $s_i$.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015), which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector ($s_i$) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states $h_i$, one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail (the accompanying illustration and derivation text are garbled in the PDF extraction and omitted here).", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: $\beta_t = s_h + s_g + s_s$, where $\beta_t$ is the input vector of the decoder at time $t$ and $h$, $g$, $s$ are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector $\beta_t$, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): $e^t_i = h_t^{\top} W s_i + U^{\top} h_t + V^{\top} s_i + b$, where $W$, $U$, $V$, $b$ are parameters, denoting the weight matrix of the bilinear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and the overfitting of the model.", "We follow this work by applying a one-layer perceptron to $s_i$ and $h_i$, with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector $h_t$ and child vector $s_i$ as inputs.", "Again, we use MLPs to transform $h_t$ and $s_i$ before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, $P_\theta(y|x)$, which can be factorized as $P_\theta(y|x) = \prod_{i=1}^{k} P_\theta(p_i \mid p_{<i}, x) = \prod_{i=1}^{k} \prod_{j=1}^{l_i} P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x)$ (2), where $\theta$ represents model parameters.", "$p_{<i}$ denotes the preceding paths that have already been generated.", "$c_{i,j}$ represents the $j$th word in $p_i$ and $c_{i,<j}$ denotes all the preceding words on the path $p_i$.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define $P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x) = a^t_{c_{i,j}}$, where the attention vector $a^t$ (of dimension $n$) is used as the distribution over the indices of words in the sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multi-class classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n.
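A greedy sketch of the decoding procedure summarized in the Overview (internal stack initialized with $, pointer restricted to the still-available words plus the current head, self-pointing to close a head) is given below. The callables `decoder_step` and `point` are hypothetical stand-ins for the decoder LSTM and the biaffine pointer, and the constraint that $ may not close while words remain unattached is an assumption standing in for the paper's validity bookkeeping; the actual system additionally uses beam search.

```python
# Greedy sketch of the top-down, depth-first decoding with an internal stack.
def greedy_decode(encoder_states, decoder_step, point, n_words):
    """encoder_states[0] is the root symbol $; words are indexed 1..n_words."""
    stack = [0]                               # start with $ on the stack
    available = set(range(1, n_words + 1))    # words not yet attached
    heads = [0] * (n_words + 1)
    state = None
    while stack:
        head = stack[-1]
        state = decoder_step(encoder_states[head], state)
        candidates = available | {head}
        if head == 0 and available:
            candidates = available            # $ may not close while words remain
        child = point(state, encoder_states, candidates)
        if child == head:                     # self-pointing: head is finished
            stack.pop()
        else:                                 # new arc head -> child
            heads[child] = head
            available.discard(child)
            stack.append(child)
    return heads[1:]                          # predicted head index per word
```

Each decoding step performs one attention pass over the n encoder states, which is where the overall O(n^2) decoding complexity comes from.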
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
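The five metrics defined at the start of this passage can be computed directly from predicted and gold (head, label) pairs. A sketch, with the per-sentence input format assumed for illustration and punctuation filtering (used for Chinese and English) omitted:

```python
# Sketch of UAS, LAS, UCM, LCM and root accuracy (RA).
def evaluate(pred_sents, gold_sents):
    """Each sentence is a list of (head_index, label) per word; head 0 = root."""
    arc_ok = lab_ok = total = 0
    ucm = lcm = root_ok = 0
    for pred, gold in zip(pred_sents, gold_sents):
        sent_arc_ok = sent_lab_ok = True
        for (ph, pl), (gh, gl) in zip(pred, gold):
            total += 1
            arc_ok += (ph == gh)
            lab_ok += (ph == gh and pl == gl)
            sent_arc_ok &= (ph == gh)
            sent_lab_ok &= (ph == gh and pl == gl)
        ucm += sent_arc_ok                    # whole tree correct (unlabeled)
        lcm += sent_lab_ok                    # whole tree correct (labeled)
        gold_root = next(i for i, (gh, _) in enumerate(gold) if gh == 0)
        root_ok += (pred[gold_root][0] == 0)  # root word correctly attached to $
    n = len(gold_sents)
    return {'UAS': arc_ok / total, 'LAS': lab_ok / total,
            'UCM': ucm / n, 'LCM': lcm / n, 'RA': root_ok / n}
```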
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps, w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
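The error analysis in this record buckets labeled precision and recall by structural factors such as dependency length. One way to compute such a breakdown, with the per-token data format assumed for illustration:

```python
# Sketch of labeled precision/recall bucketed by dependency length
# (|head index - modifier index|), following the McDonald-and-Nivre-style
# analysis described above.
from collections import defaultdict

def by_dependency_length(pred_sents, gold_sents, bucket=lambda d: min(d, 7)):
    correct = defaultdict(int)    # arcs correct in head and label
    pred_n = defaultdict(int)     # predicted arcs of each (bucketed) length
    gold_n = defaultdict(int)     # gold arcs of each (bucketed) length
    for pred, gold in zip(pred_sents, gold_sents):
        for i, ((ph, pl), (gh, gl)) in enumerate(zip(pred, gold)):
            dep = i + 1                           # 1-based token index
            pred_n[bucket(abs(ph - dep))] += 1
            gold_n[bucket(abs(gh - dep))] += 1
            if ph == gh and pl == gl:
                correct[bucket(abs(gh - dep))] += 1
    return {b: (correct[b] / max(pred_n[b], 1),   # labeled precision
                correct[b] / max(gold_n[b], 1))   # labeled recall
            for b in sorted(gold_n)}
```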
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-10
Experiment 1 Main Results and Analysis
English PTB, Chinese PTB, German CoNLL 2009 shared task Parsing models for comparison Baseline: Deep Biaffine (BiAF) parser (Dozat et al., 2017), augmented with character-level information Four versions of StackPtr: Org: utilizes only head word information +gpar: augment Org with grandparent information +sib: augment Org with sibling information Full: include all the three information Unlabeled Attachment Score (UAS), Labeled Attachment Score (LAS), Unlabeled Complete Match (UCM), Labeled Complete Match (LCM), Root Accuracy (RA)
English PTB, Chinese PTB, German CoNLL 2009 shared task Parsing models for comparison Baseline: Deep Biaffine (BiAF) parser (Dozat et al., 2017), augmented with character-level information Four versions of StackPtr: Org: utilizes only head word information +gpar: augment Org with grandparent information +sib: augment Org with sibling information Full: include all the three information Unlabeled Attachment Score (UAS), Labeled Attachment Score (LAS), Unlabeled Complete Match (UCM), Labeled Complete Match (LCM), Root Accuracy (RA)
[]
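The Org, +gpar, +sib and Full variants listed in the slide above differ only in which encoder states are summed into the decoder input. A small sketch of that switch; the argument names are illustrative, not the paper's API:

```python
# Sketch of assembling the decoder input for the four model variants:
# Org uses only the head state; +gpar and +sib add the grandparent or
# sibling state; Full enables both.
def decoder_input(s, head, grandparent=None, sibling=None,
                  use_gpar=False, use_sib=False):
    beta = s[head]                        # Org: head encoder state only
    if use_gpar and grandparent is not None:
        beta = beta + s[grandparent]      # +gpar
    if use_sib and sibling is not None:
        beta = beta + s[sibling]          # +sib
    return beta                           # element-wise sum keeps the dimension
```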
GEM-SciDuet-train-37#paper-1053#slide-12
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence, just as for other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s i .", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s i ) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h i , one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail.", "[Figure: diagram of the grandparent and sibling higher-order structures.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β_t = s_h + s_g + s_s, where β_t is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β_t , thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017) : e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, where W, U, V, b are parameters, denoting the weight matrix of the bilinear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017) , applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and overfitting of the model.", "We follow this work by applying a one-layer perceptron to s i and h i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h t and child vector s i as inputs.", "Again, we use MLPs to transform h t and s i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences: P_θ(y|x), which can be factorized as: P_θ(y|x) = ∏_{i=1}^{k} P_θ(p_i | p_{<i}, x) = ∏_{i=1}^{k} ∏_{j=1}^{l_i} P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x) (Eq. 2), where θ represents model parameters.", "p <i denotes the preceding paths that have already been generated.", "c i,j represents the jth word in p i and c i,<j denotes all the preceding words on the path p i .", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x) = a_t, where the attention vector a_t (of dimension n) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017) , the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n.
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
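To make the five metrics above concrete, the following sketch (our own illustration, not the official evaluation script) computes them from gold and predicted head/label sequences; the punctuation exclusion applied for Chinese and English is assumed to be handled by the caller.

```python
from typing import Dict, List, Sequence, Tuple

Sentence = Tuple[List[int], List[str]]  # (heads, labels); heads[i] is the head index of token i, 0 = root "$"


def parse_metrics(gold: Sequence[Sentence], pred: Sequence[Sentence]) -> Dict[str, float]:
    tokens = arc_ok = lab_ok = 0   # token-level counts for UAS / LAS
    sents = ucm = lcm = 0          # sentence-level counts for UCM / LCM
    roots = root_ok = 0            # counts for root accuracy (RA)
    for (g_heads, g_labels), (p_heads, p_labels) in zip(gold, pred):
        sents += 1
        arc_hits = [gh == ph for gh, ph in zip(g_heads, p_heads)]
        lab_hits = [hit and gl == pl for hit, gl, pl in zip(arc_hits, g_labels, p_labels)]
        tokens += len(g_heads)
        arc_ok += sum(arc_hits)
        lab_ok += sum(lab_hits)
        ucm += all(arc_hits)       # unlabeled complete match: every head correct
        lcm += all(lab_hits)       # labeled complete match: every head and label correct
        for i, gh in enumerate(g_heads):
            if gh == 0:            # token attached to the virtual root
                roots += 1
                root_ok += int(p_heads[i] == 0)
    return {"UAS": arc_ok / tokens, "LAS": lab_ok / tokens,
            "UCM": ucm / sents, "LCM": lcm / sents, "RA": root_ok / roots}
```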
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining a number of parsing steps linear in the length of the sentence.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to investigate how to conduct experiments that improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
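As a reference for the "predefined fixed order" mentioned in the last sentence, the sketch below shows one possible realization of the inside-out child ordering adopted in § 3.1: each head's children are visited from nearest to farthest. This is our own illustration; the exact tie-breaking between a left and a right child at equal distance is not specified in the text above and is an assumption here.

```python
from typing import List


def inside_out_children(head: int, children: List[int]) -> List[int]:
    """Order a head's children from nearest to farthest from the head.

    Tie-break between equally distant left/right children is by surface
    position (an assumption, not taken from the paper).
    """
    return sorted(children, key=lambda c: (abs(c - head), c))


# Example: inside_out_children(4, [1, 2, 6, 7]) -> [2, 6, 1, 7]
```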
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-12
Parsing Performance on Test Data wrt Sentence Length
StackPtr tends to perform better on shorter sentences, consistent with the transition-based vs. graph-based analysis of McDonald and Nivre (2011)
StackPtr tends to perform better on shorter sentences, consistent with the transition-based vs. graph-based analysis of McDonald and Nivre (2011)
[]
GEM-SciDuet-train-37#paper-1053#slide-13
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
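A minimal greedy-decoding sketch of the top-down procedure described in this abstract (our own illustration; the callable point_to_child, which stands in for the full encoder/decoder and attention step, is an assumption):

```python
from typing import Callable, Dict, List


def stackptr_decode(n: int, point_to_child: Callable[[int, List[int]], int]) -> Dict[int, int]:
    """Greedy top-down, depth-first decoding; returns a child -> head map.

    Tokens are 1..n and 0 is the virtual root $.  `point_to_child(head, available)`
    plays the role of the pointer/attention step: it returns either a position in
    `available` (a new child of `head`) or `head` itself, meaning "no more children".
    """
    heads: Dict[int, int] = {}
    available = list(range(1, n + 1))   # every word must be attached exactly once
    stack = [0]                         # the stack is initialised with the root symbol $
    while stack and available:          # at most 2n - 1 pointer steps in total
        head = stack[-1]
        choice = point_to_child(head, available)
        if choice == head:
            stack.pop()                 # all children of `head` have been generated
        else:
            heads[choice] = head        # new dependency arc head -> choice
            available.remove(choice)
            stack.append(choice)        # depth-first: descend into the new child
    return heads
```

Beam-search decoding, as used in the experiments (beam size 10), keeps the k highest-scoring partial trees instead of the single greedy choice; that extension is omitted here.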
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s i .", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s i ) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h i , one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail: 0 12 3 456 2782 96 56 986 2 5214 3 77543 9 5 2 52 ÿ ÿ ! \"", "\"#$% & #& ' % !", "#& (()& *% & & !", "*\"& (+(\"!", "\"% ) &, \" \"#$' (% & #-.,\"/\" \"#$ % %% #*) (& ** *% & 0 % #( !", "$% %()- 1 2 3 -45 67 896: ;<=>?", "@ ABCDE3 -F14 G H I .J3-4 %& *% &()!", "!& %#( & +!", ", & * -\"ABCDE3 -F14K!", "& %& *K&& *&& 1 %& *!", "$% %()% & #-'A #& *# \" !", "0 & $()J3-4 (,% (& !", "!", "$, & *& *!", "& *() & *% & #.\"", "#& !", "$%(!", "K EL' % #& #! '", "#((%& & $.", "\"(,* #*) ( %& *) (0 #%()& * % .", "%& (M NOP QR#*\" \"#$& & (%!", "!SNR P T .,* #*#+% #( \" % (!", "& (' #& ( \" % #+) ( !", "U\"%) (!", "!", "(,% ABCDE3 -F145 V W<; ABCDEXDY3 -FZ4 Y*& % .,& && *\" \"#$& 1%%& () & %Z.#*(),* #*[%% & #(& 0 +& (& (& *% #( ()1'(# & ) #& ( U& (% . )", "/# & % !", "( & *% % &) (% (!", "K EL'' \\\"/ A # /#!", "!", "$.A#& (%u' .u' v.\"u' w\"% # + % %& *& .", "% #& K!", "$.)", "#& (& % & ( \"0 #* ! \"", "& % .", "\"0 % +!", "& % .\"", "& () \"0 % +!", "\"& 0 % +!", "& % ' x yz{ | } {| { { } | C,& * \"0 ( \"\" \"#$ % %+ !", "\"( \"%) ( % & % !", "( & *%'_ & * % % #& (., (K \"+#[ (\"(& ,( !", "K& % %) ( K (%,( [' G H G +H vY*\"$ #0 ( % & #& % \"\" K& (%()& *E % G vH!", "( & *' B( !", "& % % \" #& \"%& !", "%\" 0 #( !", "& % %%& U( \"% '(+ K & $.,! \"", "& *% $& # *& 0 *\"\"K % (% ' x { | } } { } { Ỹ */ % && $ () % ,\"% # +% %/ % & 0 ( \" ) #& ( U& (.,* #*\"#( (% %\" 0 \"#$& & ( & % \" K \"!\"", "\"# % 'E % 0 G vH & (\"#\", \"!", "$0 % \"\"$ #0 ( !", "( & *) (/ % & 0 ( \" % % & %& *+% %) ($ % % .", "#! \"", "(, !", "( & *%.,% U & %\"% * ' Y*E % G vH!", "( & * %+% \"(& ,( & !", "& \"& $ %()\"$ #0 ( %& #0 & % OQ S¡P% % .,* #*#(% % &()*\"0 ,( \"\" & %\"% #\"& %((% \".", "\"¢ £OQ ¤ S¡P% % .,* #*#(% % &()\" \"#$\"& * (+& ,& **\"\"(\" / ' ( !", "!", "$.,\"(& #( !", "& % %¥ ¦ § ,* ©\"ª & * \" #%()& *% « %*\"0 ,( \"\"\" ( & ' #( !", "& % %\"0 (& \"%¬ ¦ § ,* ©\"® & * \"()& * *\"\"(\" /()\" \"#$' _ & & K!", "$.", "#( !", "& % % & %*! )", "0 #(% & & & *\"\"+$©.,* % #( !", "& % %(!", "$ & !*! )", "0 #(% & & & .% #& *#(% & & &# +& \"\"+$\"\" ( (\" / %& (®' E#*& $ ()% %# & \"+$ # % K!", "$ #(+ & ,(%!", "!", ".", "\"#&% % & *#(0 % & #& (% % # /\" * #!", "!", "$ v' #( !", "& % %#(% & #& \") ( ()#( !", "& % % . 
\"", "#& & *\" K % (()& * °©F®± & (#(% & & & %*\"\"+$©\" ®' #( !", "& % %# & \"+$#( !", "& 0 #( !", "& % , & *& *(& **!", ")() ®« %#(% & & & 'Y* ( &()#(#& & ( #*#(% & #& (²® vG H(³ 0 vG +H ² %& *T S¡ ¢ PSQ¢ £P .)", "\"& *&% & + & \"& (/\"& *( & !#(% & #& (' _ ( \"& ( % %& #-.", "&% ) /#%& ( /\"( & !#(% & #& (%) (!", "!#( !", "& \" #( !", "& % %\"/\"(-' Y* %#+ To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β t = s h + s g + s s where β t is the input vector of decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β t , thus introducing no additional model parameters.", "Biaffine Attention Mechanism For attention score function (Eq.", "(1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017) : e t i = h T t Ws i + U T h t + V T s i + b where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017) , applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and overfitting of the model.", "We follow this work by using a one-layer perceptron to s i and h i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h t and child vector s i as inputs.", "Again, we use MLPs to transform h t and s i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences: P θ (y|x), which can be factorized as: P θ (y|x) = k i=1 P θ (p i |p <i , x) = k i=1 l i j=1 P θ (c i,j |c i,<j , p <i , x), (2) where θ represents model parameters.", "p <i denotes the preceding paths that have already been generated.", "c i,j represents the jth word in p i and c i,<j denotes all the proceeding words on the path p i .", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P θ (c i,j |c i,<j , p <i , x) = a t , where attention vector a t (of dimension n) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separated multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017) , the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n. 
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
leftto-right restriction in classical transition-based parsers, while maintaining linear parsing steps, w.r.t the length of the sentences.", "Experimental re-sults on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-ofthe-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-13
Parsing Performance wrt Dependency Length
The gap between Stack-Ptr and BiAF is marginal, graph-based BiAF still performs better for longer arcs
The gap between Stack-Ptr and BiAF is marginal, graph-based BiAF still performs better for longer arcs
[]
GEM-SciDuet-train-37#paper-1053#slide-14
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s_i.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015), which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s_i) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h_i, one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail.", "[Figure content garbled in PDF extraction at this point; beyond the illustration of the grandparent and sibling structures, no text is recoverable.]",
"To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β_t = s_h + s_g + s_s, where β_t is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β_t, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, where W, U, V, and b are parameters, denoting the weight matrix of the bilinear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can reduce both the dimensionality and the overfitting of the model.", "We follow this work by applying a one-layer perceptron to s_i and h_i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h_t and child vector s_i as inputs.", "Again, we use MLPs to transform h_t and s_i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, P_θ(y|x), which can be factorized as: P_θ(y|x) = ∏_{i=1}^{k} P_θ(p_i | p_{<i}, x) = ∏_{i=1}^{k} ∏_{j=1}^{l_i} P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x), (2) where θ represents the model parameters.", "p_{<i} denotes the preceding paths that have already been generated.", "c_{i,j} represents the jth word in p_i and c_{i,<j} denotes all the preceding words on the path p_i.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x) = a_t, where the attention vector a_t (of dimension n) is used as the distribution over the indices of words in the sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n.
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps, w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
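To make the scoring equations quoted in the paper content above concrete (the biaffine score e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b and the higher-order decoder input β_t = s_h + s_g + s_s), here is a small NumPy sketch. The dimensions and random parameters are placeholders for illustration, not the paper's trained weights or hyper-parameters, and the MLP transformations applied before scoring are omitted.

```python
import numpy as np

d = 4                                    # toy hidden size (placeholder only)
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))              # weight matrix of the bilinear term
U = rng.normal(size=d)                   # weight vector for the decoder state
V = rng.normal(size=d)                   # weight vector for the encoder states
b = 0.0                                  # bias

def biaffine_scores(h_t, S):
    """e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, for every encoder state s_i (rows of S)."""
    return (S @ W.T) @ h_t + U @ h_t + S @ V + b        # shape (n,)

def decoder_input(S, head, grandparent, sibling):
    """beta_t = s_h + s_g + s_s: element-wise sum keeps the input dimension at d."""
    return S[head] + S[grandparent] + S[sibling]

# Toy usage: 5 encoder states, one decoder state, pointer distribution via softmax.
S = rng.normal(size=(5, d))
h_t = rng.normal(size=d)
e_t = biaffine_scores(h_t, S)
a_t = np.exp(e_t - e_t.max())
a_t /= a_t.sum()                         # a_t is the distribution used to select the child
```

In training, taking the negative log of the probability that a_t assigns to the gold child at each step gives the cross-entropy arc loss implied by the factorization in Eq. (2).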
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-14
Parsing Performance wrt Root Distance
Different from McDonald and Nivre (2011), StackPtr and BiAf similar regardless of root distance
Different from McDonald and Nivre (2011), StackPtr and BiAf similar regardless of root distance
[]
GEM-SciDuet-train-37#paper-1053#slide-15
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s_i.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015), which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s_i) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h_i, one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail.", "[Figure content garbled in PDF extraction at this point; beyond the illustration of the grandparent and sibling structures, no text is recoverable.]",
"To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β_t = s_h + s_g + s_s, where β_t is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β_t, thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, where W, U, V, and b are parameters, denoting the weight matrix of the bilinear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can reduce both the dimensionality and the overfitting of the model.", "We follow this work by applying a one-layer perceptron to s_i and h_i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h_t and child vector s_i as inputs.", "Again, we use MLPs to transform h_t and s_i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, P_θ(y|x), which can be factorized as: P_θ(y|x) = ∏_{i=1}^{k} P_θ(p_i | p_{<i}, x) = ∏_{i=1}^{k} ∏_{j=1}^{l_i} P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x), (2) where θ represents the model parameters.", "p_{<i} denotes the preceding paths that have already been generated.", "c_{i,j} represents the jth word in p_i and c_{i,<j} denotes all the preceding words on the path p_i.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P_θ(c_{i,j} | c_{i,<j}, p_{<i}, x) = a_t, where the attention vector a_t (of dimension n) is used as the distribution over the indices of words in the sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n.
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while keeping the number of parsing steps linear in the length of the sentence.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to design experiments that analyze parsing errors more thoroughly, both qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches that learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
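The error analysis summarized in the rows above buckets labeled precision and recall by structural factors such as dependency length, following McDonald and Nivre (2011). The Python sketch below shows one way such a breakdown can be computed from predicted and gold parses; the input format (parallel per-sentence lists of 1-based head indices and labels) and the bucket size are illustrative assumptions, not the authors' evaluation code.

from collections import defaultdict

def las_by_dependency_length(gold_heads, gold_labels, pred_heads, pred_labels, bucket=1):
    """Bucket labeled precision/recall by dependency length, in the spirit of
    McDonald and Nivre (2011). Inputs are per-sentence lists; heads are 0 for
    the root and 1-based word indices otherwise."""
    correct = defaultdict(int)   # bucket -> correctly predicted labeled arcs
    gold_n = defaultdict(int)    # bucket -> gold arcs in bucket (recall denominator)
    pred_n = defaultdict(int)    # bucket -> predicted arcs in bucket (precision denominator)

    for gh, gl, ph, pl in zip(gold_heads, gold_labels, pred_heads, pred_labels):
        for dep, (g_head, g_lab, p_head, p_lab) in enumerate(zip(gh, gl, ph, pl), start=1):
            g_len = abs(dep - g_head)          # gold dependency length
            p_len = abs(dep - p_head)          # predicted dependency length
            gold_n[g_len // bucket] += 1
            pred_n[p_len // bucket] += 1
            if g_head == p_head and g_lab == p_lab:
                # A correct arc has identical gold and predicted length, so one
                # counter keyed by the gold bucket serves both precision and recall.
                correct[g_len // bucket] += 1

    report = {}
    for b in sorted(set(gold_n) | set(pred_n)):
        prec = correct[b] / pred_n[b] if pred_n[b] else 0.0
        rec = correct[b] / gold_n[b] if gold_n[b] else 0.0
        report[b] = (prec, rec)
    return report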
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-15
Effect of POS Embedding
Gold: Parser with gold-standard POS tags Pred: Parser with predicted POS tags (97.3% accuracy) None: Parser without POS tags
Gold: Parser with gold-standard POS tags Pred: Parser with predicted POS tags (97.3% accuracy) None: Parser without POS tags
[]
GEM-SciDuet-train-37#paper-1053#slide-16
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root to leaf) in a depth-first fashion. The stack tracks the status of the depth-first search, and the pointer network selects one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction of classical transition-based parsers. Yet the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence, just as in other transition-based parsers, yielding an efficient decoding algorithm with $O(n^2)$ time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
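The abstract above describes decoding as a top-down, depth-first traversal: the internal stack starts with the root, the pointer selects one child for the word on top of the stack, and a head that points to itself is popped. The skeleton below is a minimal, greedy rendering of that control flow; the function point(head, available) stands in for the trained pointer network and is a placeholder introduced here for illustration.

def greedy_stackptr_decode(n_words, point):
    """Greedy top-down decoding skeleton for a stack-pointer parser.

    n_words: number of words in the sentence (1-based word indices; 0 is the root $).
    point:   callable (head, available) -> chosen index; returning `head` itself
             means "this head has no more children".
    Returns a dict child -> head covering every word exactly once.
    """
    heads = {}
    available = set(range(1, n_words + 1))   # words not yet attached
    stack = [0]                               # internal stack, initialized with the root

    while stack and available:
        head = stack[-1]
        child = point(head, available)        # assumed to return `head` or an available index
        if child == head:                     # head selects itself: all its children are done
            stack.pop()
            continue
        heads[child] = head                   # new arc (head -> child)
        available.remove(child)               # each word is attached exactly once
        stack.append(child)                   # descend depth-first into the new subtree
    return heads

With a trained model, point would score every position with the biaffine attention described later in the paper and return the argmax over the still-available words plus the head itself.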
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states $s_i$.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector ($s_i$) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states $h_i$, one for each decoding step.",
"Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail.", "[Garbled figure/PDF-extraction residue omitted here.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: $\beta_t = s_h + s_g + s_s$, where $\beta_t$ is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector $\beta_t$, thus introducing no additional model parameters.",
"Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): $e_i^t = h_t^{\top} W s_i + U^{\top} h_t + V^{\top} s_i + b$, where W, U, V, and b are parameters, denoting the weight matrix of the bilinear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can reduce both the dimensionality and the overfitting of the model.", "We follow this work by applying a one-layer perceptron to $s_i$ and $h_i$ with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector $h_t$ and child vector $s_i$ as inputs.", "Again, we use MLPs to transform $h_t$ and $s_i$ before feeding them into the classifier.",
"Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, $P_\theta(y \mid x)$, which can be factorized as $P_\theta(y \mid x) = \prod_{i=1}^{k} P_\theta(p_i \mid p_{<i}, x) = \prod_{i=1}^{k} \prod_{j=1}^{l_i} P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x)$ (2), where θ represents the model parameters.", "$p_{<i}$ denotes the preceding paths that have already been generated.", "$c_{i,j}$ represents the jth word on $p_i$ and $c_{i,<j}$ denotes all the preceding words on the path $p_i$.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define $P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x) = a^t$, where the attention vector $a^t$ (of dimension n) is used as the distribution over the indices of words in the sentence.",
"Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n. 
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while keeping the number of parsing steps linear in the length of the sentence.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to design experiments that analyze parsing errors more thoroughly, both qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches that learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
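Two of the equations in the paper text above are small enough to spell out numerically: the higher-order decoder input $\beta_t = s_h + s_g + s_s$ and the biaffine arc score $e_i^t = h_t^{\top} W s_i + U^{\top} h_t + V^{\top} s_i + b$. The numpy sketch below only illustrates the shapes involved; the dimensions and random parameters are made up for the example, and the MLP projections the paper applies before scoring are omitted.

import numpy as np

rng = np.random.default_rng(0)
n, d_enc, d_dec = 6, 8, 8          # sentence length and state sizes (illustrative)

S = rng.normal(size=(n, d_enc))    # encoder hidden states s_1..s_n
W = rng.normal(size=(d_dec, d_enc))
U = rng.normal(size=(d_dec,))
V = rng.normal(size=(d_enc,))
b = 0.0

def decoder_input(h_idx, g_idx, s_idx):
    """beta_t = s_h + s_g + s_s: element-wise sum keeps the input dimension fixed."""
    return S[h_idx] + S[g_idx] + S[s_idx]

def biaffine_scores(h_t):
    """e_i^t = h_t^T W s_i + U^T h_t + V^T s_i + b, computed for every position i."""
    bilinear = S @ (W.T @ h_t)     # (n,): the bilinear term h_t^T W s_i for each i
    return bilinear + U @ h_t + S @ V + b

beta = decoder_input(2, 0, 3)      # e.g. head 2, grandparent 0 (root), sibling 3; would feed the decoder LSTM
h_t = rng.normal(size=(d_dec,))    # stand-in for the decoder LSTM state (the LSTM itself is not modeled here)
e_t = biaffine_scores(h_t)
a_t = np.exp(e_t - e_t.max())
a_t /= a_t.sum()                   # softmax -> pointer distribution over the words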
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-16
Experiment 2 Universal Dependency Treebanks
Universal Dependency Treebanks (V2.2) Languages: Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish Note: we also ran experiments on 14 CoNLL Treebanks. (see the paper for details)
Universal Dependency Treebanks (V2.2) Languages: Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish Note: we also ran experiments on 14 CoNLL Treebanks. (see the paper for details)
[]
GEM-SciDuet-train-37#paper-1053#slide-17
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root to leaf) in a depth-first fashion. The stack tracks the status of the depth-first search, and the pointer network selects one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction of classical transition-based parsers. Yet the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence, just as in other transition-based parsers, yielding an efficient decoding algorithm with $O(n^2)$ time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
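The abstract above notes that each decoding step points into the whole sentence while a valid tree must use every word exactly once; the paper enforces this with a list of still-available words at test time. One simple way to realize that constraint, sketched below with numpy, is to mask unavailable positions before normalizing the pointer scores; the -inf masking itself is an illustrative assumption rather than a detail taken from the paper.

import numpy as np

def masked_pointer_choice(scores, available, head):
    """Pick the highest-scoring position among the still-available words, always
    keeping the current head as a legal choice (pointing at itself means the head
    has no more children). Masking with -inf before the softmax is one simple way
    to realize the 'available word list'."""
    masked = np.full_like(scores, -np.inf)
    legal = sorted(available | {head})
    masked[legal] = scores[legal]
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()
    return int(np.argmax(probs)), probs

scores = np.array([0.3, 2.1, -0.5, 1.7, 0.0])   # toy pointer scores for a 5-position sentence
choice, probs = masked_pointer_choice(scores, available={2, 3, 4}, head=1)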
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states $s_i$.", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector ($s_i$) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states $h_i$, one for each decoding step.",
"Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail.", "[Garbled figure/PDF-extraction residue omitted here.]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: $\beta_t = s_h + s_g + s_s$, where $\beta_t$ is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector $\beta_t$, thus introducing no additional model parameters.",
"Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): $e_i^t = h_t^{\top} W s_i + U^{\top} h_t + V^{\top} s_i + b$, where W, U, V, and b are parameters, denoting the weight matrix of the bilinear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can reduce both the dimensionality and the overfitting of the model.", "We follow this work by applying a one-layer perceptron to $s_i$ and $h_i$ with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector $h_t$ and child vector $s_i$ as inputs.", "Again, we use MLPs to transform $h_t$ and $s_i$ before feeding them into the classifier.",
"Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences, $P_\theta(y \mid x)$, which can be factorized as $P_\theta(y \mid x) = \prod_{i=1}^{k} P_\theta(p_i \mid p_{<i}, x) = \prod_{i=1}^{k} \prod_{j=1}^{l_i} P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x)$ (2), where θ represents the model parameters.", "$p_{<i}$ denotes the preceding paths that have already been generated.", "$c_{i,j}$ represents the jth word on $p_i$ and $c_{i,<j}$ denotes all the preceding words on the path $p_i$.", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define $P_\theta(c_{i,j} \mid c_{i,<j}, p_{<i}, x) = a^t$, where the attention vector $a^t$ (of dimension n) is used as the distribution over the indices of words in the sentence.",
"Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq. (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017), the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n. 
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
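A small reference implementation of the attachment metrics listed above (UAS, LAS, UCM, LCM). It is a sketch, not the evaluation script used in the paper: it assumes per-sentence lists of predicted and gold heads/labels with punctuation already filtered out, and omits root accuracy (RA) for brevity.

def attachment_scores(pred_heads, pred_labels, gold_heads, gold_labels):
    # Each argument: list of sentences; each sentence: list of per-token values.
    tok, ua, la = 0, 0, 0
    sents, ucm, lcm = 0, 0, 0
    for ph, pl, gh, gl in zip(pred_heads, pred_labels, gold_heads, gold_labels):
        unlabeled = [p == g for p, g in zip(ph, gh)]
        labeled = [u and p == g for u, p, g in zip(unlabeled, pl, gl)]
        tok += len(gh)
        ua += sum(unlabeled)
        la += sum(labeled)
        sents += 1
        ucm += all(unlabeled)   # unlabeled complete match
        lcm += all(labeled)     # labeled complete match
    return {"UAS": ua / tok, "LAS": la / tok,
            "UCM": ucm / sents, "LCM": lcm / sents}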
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps, w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
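Relatedly, the sentence-length breakdown used in the error analysis above (accuracy relative to sentence length, Figure 3a) can be reproduced with a simple bucketing routine. The bucket width of 10 tokens below is an assumption, since the exact binning is not specified in the text.

from collections import defaultdict

def accuracy_by_length(pred_heads, gold_heads, width=10):
    # Group sentences into length buckets (1-10, 11-20, ...) and report
    # unlabeled attachment accuracy per bucket.
    correct, total = defaultdict(int), defaultdict(int)
    for ph, gh in zip(pred_heads, gold_heads):
        bucket = (len(gh) - 1) // width
        correct[bucket] += sum(p == g for p, g in zip(ph, gh))
        total[bucket] += len(gh)
    return {f"{b * width + 1}-{(b + 1) * width}": correct[b] / total[b]
            for b in sorted(total)}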
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-17
LAS on UD Treebanks
bg ca cs de en es fr it nl no ro ru
bg ca cs de en es fr it nl no ro ru
[]
GEM-SciDuet-train-37#paper-1053#slide-18
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s i .", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s i ) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h i , one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail: 0 12 3 456 2782 96 56 986 2 5214 3 77543 9 5 2 52 ÿ ÿ ! \"", "\"#$% & #& ' % !", "#& (()& *% & & !", "*\"& (+(\"!", "\"% ) &, \" \"#$' (% & #-.,\"/\" \"#$ % %% #*) (& ** *% & 0 % #( !", "$% %()- 1 2 3 -45 67 896: ;<=>?", "@ ABCDE3 -F14 G H I .J3-4 %& *% &()!", "!& %#( & +!", ", & * -\"ABCDE3 -F14K!", "& %& *K&& *&& 1 %& *!", "$% %()% & #-'A #& *# \" !", "0 & $()J3-4 (,% (& !", "!", "$, & *& *!", "& *() & *% & #.\"", "#& !", "$%(!", "K EL' % #& #! '", "#((%& & $.", "\"(,* #*) ( %& *) (0 #%()& * % .", "%& (M NOP QR#*\" \"#$& & (%!", "!SNR P T .,* #*#+% #( \" % (!", "& (' #& ( \" % #+) ( !", "U\"%) (!", "!", "(,% ABCDE3 -F145 V W<; ABCDEXDY3 -FZ4 Y*& % .,& && *\" \"#$& 1%%& () & %Z.#*(),* #*[%% & #(& 0 +& (& (& *% #( ()1'(# & ) #& ( U& (% . )", "/# & % !", "( & *% % &) (% (!", "K EL'' \\\"/ A # /#!", "!", "$.A#& (%u' .u' v.\"u' w\"% # + % %& *& .", "% #& K!", "$.)", "#& (& % & ( \"0 #* ! \"", "& % .", "\"0 % +!", "& % .\"", "& () \"0 % +!", "\"& 0 % +!", "& % ' x yz{ | } {| { { } | C,& * \"0 ( \"\" \"#$ % %+ !", "\"( \"%) ( % & % !", "( & *%'_ & * % % #& (., (K \"+#[ (\"(& ,( !", "K& % %) ( K (%,( [' G H G +H vY*\"$ #0 ( % & #& % \"\" K& (%()& *E % G vH!", "( & *' B( !", "& % % \" #& \"%& !", "%\" 0 #( !", "& % %%& U( \"% '(+ K & $.,! \"", "& *% $& # *& 0 *\"\"K % (% ' x { | } } { } { Ỹ */ % && $ () % ,\"% # +% %/ % & 0 ( \" ) #& ( U& (.,* #*\"#( (% %\" 0 \"#$& & ( & % \" K \"!\"", "\"# % 'E % 0 G vH & (\"#\", \"!", "$0 % \"\"$ #0 ( !", "( & *) (/ % & 0 ( \" % % & %& *+% %) ($ % % .", "#! \"", "(, !", "( & *%.,% U & %\"% * ' Y*E % G vH!", "( & * %+% \"(& ,( & !", "& \"& $ %()\"$ #0 ( %& #0 & % OQ S¡P% % .,* #*#(% % &()*\"0 ,( \"\" & %\"% #\"& %((% \".", "\"¢ £OQ ¤ S¡P% % .,* #*#(% % &()\" \"#$\"& * (+& ,& **\"\"(\" / ' ( !", "!", "$.,\"(& #( !", "& % %¥ ¦ § ,* ©\"ª & * \" #%()& *% « %*\"0 ,( \"\"\" ( & ' #( !", "& % %\"0 (& \"%¬ ¦ § ,* ©\"® & * \"()& * *\"\"(\" /()\" \"#$' _ & & K!", "$.", "#( !", "& % % & %*! )", "0 #(% & & & *\"\"+$©.,* % #( !", "& % %(!", "$ & !*! )", "0 #(% & & & .% #& *#(% & & &# +& \"\"+$\"\" ( (\" / %& (®' E#*& $ ()% %# & \"+$ # % K!", "$ #(+ & ,(%!", "!", ".", "\"#&% % & *#(0 % & #& (% % # /\" * #!", "!", "$ v' #( !", "& % %#(% & #& \") ( ()#( !", "& % % . 
\"", "#& & *\" K % (()& * °©F®± & (#(% & & & %*\"\"+$©\" ®' #( !", "& % %# & \"+$#( !", "& 0 #( !", "& % , & *& *(& **!", ")() ®« %#(% & & & 'Y* ( &()#(#& & ( #*#(% & #& (²® vG H(³ 0 vG +H ² %& *T S¡ ¢ PSQ¢ £P .)", "\"& *&% & + & \"& (/\"& *( & !#(% & #& (' _ ( \"& ( % %& #-.", "&% ) /#%& ( /\"( & !#(% & #& (%) (!", "!#( !", "& \" #( !", "& % %\"/\"(-' Y* %#+ To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β t = s h + s g + s s where β t is the input vector of decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β t , thus introducing no additional model parameters.", "Biaffine Attention Mechanism For attention score function (Eq.", "(1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017) : e t i = h T t Ws i + U T h t + V T s i + b where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017) , applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and overfitting of the model.", "We follow this work by using a one-layer perceptron to s i and h i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h t and child vector s i as inputs.", "Again, we use MLPs to transform h t and s i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences: P θ (y|x), which can be factorized as: P θ (y|x) = k i=1 P θ (p i |p <i , x) = k i=1 l i j=1 P θ (c i,j |c i,<j , p <i , x), (2) where θ represents model parameters.", "p <i denotes the preceding paths that have already been generated.", "c i,j represents the jth word in p i and c i,<j denotes all the proceeding words on the path p i .", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P θ (c i,j |c i,<j , p <i , x) = a t , where attention vector a t (of dimension n) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separated multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017) , the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n. 
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
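The optimization and regularization settings from the implementation details above (Adam with β1 = β2 = 0.9, initial learning rate 0.001, annealing by a factor of 0.75 when validation performance plateaus, gradient clipping at 5.0, and 0.33 dropout) could be wired up roughly as follows in PyTorch. The model argument, the plateau patience, and the use of ReduceLROnPlateau as the annealing mechanism are assumptions; the variational recurrent dropout of Gal and Ghahramani (2016) is not reproduced here.

import torch
from torch import nn

embedding_dropout = nn.Dropout(p=0.33)   # applied to word, character and POS embeddings

def build_optimizer(model):
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.9))
    # multiply the learning rate by 0.75 when the validation score stops improving
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.75, patience=1)
    return optimizer, scheduler

def train_step(model, loss, optimizer):
    optimizer.zero_grad()
    loss.backward()
    # clip the gradient norm at 5.0 to reduce the effect of exploding gradients
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    optimizer.step()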
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps, w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-18
Conclusion and Future Work
Stack-Pointer network for dependency parsing A transition-based neural network architecture Top-down, depth-first decoding procedure State-of-the-art performance on 21 out of 29 treebanks - Learn an optimal order for the children of head words, instead of using a pre-defined fixed order
Stack-Pointer network for dependency parsing A transition-based neural network architecture Top-down, depth-first decoding procedure State-of-the-art performance on 21 out of 29 treebanks - Learn an optimal order for the children of head words, instead of using a pre-defined fixed order
[]
GEM-SciDuet-train-37#paper-1053#slide-19
1053
Stack-Pointer Networks for Dependency Parsing
We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depth-first search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the left-to-right restriction in classical transition-based parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n^2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281 ], "paper_content_text": [ "Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding.", "Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Work done while at Carnegie Mellon University.", "2016), sentiment analysis (Tai et al., 2015) , machine translation (Bastings et al., 2017) , information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017) , word sense disambiguation (Fauceglia et al., 2015) , and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014) .", "There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) : local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014) , and the globally optimized graph-based algorithms (Eisner, 1996; Mc-Donald et al., 2005a,b; .", "Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions.", "The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence.", "The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011) .", "Previous studies have explored solutions to address this challenge.", "Stack LSTMs are capable of learning representations of the parser state that are sensitive to the complete contents of the parser's state.", "Andor et al.", "(2016) proposed a globally normalized transition model to replace the locally normalized classifier.", "However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017) .", "Graph-based dependency parsers, on the other hand, learn scoring 
functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring tree.", "Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages.", "Nevertheless, these models, while accurate, are usually slow (e.g.", "decoding is O(n 3 ) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Ma and Zhao, 2012b,a) ).", "In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR).", "STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy.", "Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures.", "The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack.", "This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length.", "We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them.", "The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient.", "(ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks 1 .", "(iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017) .", "Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015) .", "Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents.", "Figure 1 (a) shows a dependency tree for the sentence, \"But there were no buyers\".", "In this paper, we will use the following notation: Input: x = {w 1 , .", ".", ".", ", w n } represents a generic sentence, where w i is the ith word.", "Output: y = {p 1 , p 2 , · · · , p k } represents a generic (possibly non-projective) dependency tree, where each path p i = $, w i,1 , w i,2 , · · · , w i,l i is a sequence of words from the root to a leaf.", "\"$\" is an universal virtual root that is added to each tree.", "Stack: σ denotes a stack configuration, which is a sequence of words.", "We use σ|w to represent a stack configuration that pushes word w into the stack σ.", "Children: ch(w i ) denotes the list of all the children (modifiers) of word w i .", "Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output 
sequence with elements that are discrete tokens corresponding to positions in an input sequence.", "This model cannot be trivially expressed by standard sequence-to-sequence networks due to the variable number of input positions in each sentence.", "PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output.", "Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bidirectional RNN), producing a sequence of encoder hidden states s i .", "At each time step t, the decoder (a uni-directional RNN) receives the input from last step and outputs decoder hidden state h t .", "The attention vector a t is calculated as follows: e t i = score(h t , s i ) a t = softmax (e t ) (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015) .", "PTR-NET regards the attention vector a t as a probability distribution over the source words, i.e.", "it uses a t i as pointers to select the input elements.", "3 Stack-Pointer Networks Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state s i .", "The internal stack σ is always initialized with the root symbol $.", "At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word w p where p is the word index), generates the hidden state h t , and computes the attention vector a t using Eq.", "(1).", "The parser chooses a specific position c according to the attention scores in a t to generate a new dependency arc (w h , w c ) by selecting w c as a child of w h .", "Then the parser pushes w c onto the stack, i.e.", "σ → σ|w c , and goes to the next step.", "At one step if the parser points w h to itself, i.e.", "c = h, it indicates that all children of the head word w h have already been selected.", "Then the parser goes to the next step by popping w h out of σ.", "At test time, in order to guarantee a valid dependency tree containing all the words in the input sentences exactly once, the decoder maintains a list of \"available\" words.", "At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words.", "For head words with multiple children, it is possible that there is more than one valid selection for each time step.", "In order to define a deterministic decoding process to make sure that there is only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(w i ) needs to be introduced.", "The predefined order of children can have different alternatives, such as leftto-right or inside-out 2 .", "In this paper, we adopt the inside-out order 3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (Mc-Donald and Pereira, 2006; ) (see § 3.4 for details).", "Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a) .", "Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; where CNNs encode character-level information of a word into its character-level repre-sentation and BLSTM models context 
information of each word.", "Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation.", "Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network.", "To enrich word-level information, we also use POS embeddings.", "Finally, the encoder outputs a sequence of hidden states s i .", "Decoder The decoder for our parser is a uni-directional LSTM.", "Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015) which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (s i ) of the top element in the stack σ (see Figure 1 (b)).", "Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures.", "The decoder produces a sequence of decoder hidden states h i , one for each decoding step.", "Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information.", "In this paper, we incorporate two kinds of higher-order structures: grandparent and sibling.", "A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail.", "[figure residue: illustration of the grandparent and sibling structures; the embedded text extracted here was garbled and is not recoverable]", "To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: β t = s h + s g + s s where β t is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively.", "Figure 1 (b) illustrates the details.", "Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector β t , thus introducing no additional model parameters.", "Biaffine Attention Mechanism For the attention score function (Eq.", "(1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017) : e t i = h T t Ws i + U T h t + V T s i + b where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector.", "As discussed in Dozat and Manning (2017) , applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and overfitting of the model.", "We follow this work by applying a one-layer perceptron to s i and h i with elu (Clevert et al., 2015) as its activation function.", "Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector h t and child vector s i as inputs.", "Again, we use MLPs to transform h t and s i before feeding them into the classifier.", "Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences: P θ (y|x), which can be factorized as: P θ (y|x) = ∏ k i=1 P θ (p i |p <i , x) = ∏ k i=1 ∏ l i j=1 P θ (c i,j |c i,<j , p <i , x), (2) where θ represents model parameters.", "p <i denotes the preceding paths that have already been generated.", "c i,j represents the jth word in p i and c i,<j denotes all the preceding words on the path p i .", "Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain.", "We define P θ (c i,j |c i,<j , p <i , x) = a t , where the attention vector a t (of dimension n) is used as the distribution over the indices of words in a sentence.", "Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq (2), which is implemented as the cross-entropy loss.", "Label Prediction We train a separate multiclass classifier in parallel to predict the dependency labels.", "Following Dozat and Manning (2017) , the classifier takes the information of the head word and its child as features.", "The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives.", "Discussion Time Complexity.", "The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n.
Together with the attention mechanism (at each step, we need to compute the attention vector a t , whose runtime is O(n)), the time complexity of decoding algorithm is O(n 2 ), which is more efficient than graph-based parsers that have O(n 3 ) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms.", "Top-down Parsing.", "When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner.", "However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order.", "Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words.", "Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them.", "When making latter decisions, the parser has access to the entire structure built in earlier steps.", "Implementation Details Pre-trained Word Embeddings.", "For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings.", "For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram embeddings.", "For other languages we use Polyglot embeddings (Al-Rfou et al., 2013) .", "Optimization.", "Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β 1 = β 2 = 0.9.", "We choose an initial learning rate of η 0 = 0.001.", "The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets.", "To reduce the effects of \"gradient exploding\", we use gradient clipping of 5.0 (Pascanu et al., 2013) .", "Dropout Training.", "To mitigate overfitting, we apply dropout (Srivastava et al., 2014; .", "For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers.", "Following Dozat and Manning (2017) , we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings.", "Hyper-Parameters.", "Some parameters are chosen from those reported in Dozat and Manning (2017) .", "We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints.", "The details of the chosen hyper-parameters for all experiments are summarized in Appendix A.", "Experiments Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993) , the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002) , and the German CoNLL 2009 corpus (Hajič et al., 2009) .", "We use the same experimental settings as Kuncoro et al.", "(2016) .", "To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks 4 .", "For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006) .", "The experimental settings are the same as .", "For UD Treebanks, we select 12 languages.", "The details of the treebanks and experimental settings are in § 4.5 and Appendix B.", "Evaluation Metrics Parsing performance is measured with 
five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA).", "Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017) , we report results excluding punctuations for Chinese and English.", "For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions.", "Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017) , which achieved state-of-the-art results on a wide range of languages.", "Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model ( § 3.2) to the original BIAF model, which boosts its performance on all languages.", "Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF.", "We compare the performance of four variations of our model with different decoder inputs -Org, +gpar, +sib and Full -where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively.", "The Full model includes all the three information as inputs.", "Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages.", "On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German.", "An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German.", "This shows that the importance of higher-order information varies in languages.", "On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing.", "The results of our parser on RA are slightly worse than BIAF.", "More details of results are provided in Appendix C. 
Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.", "Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.", "Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers.", "Our Table 1 : UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems.", "\"T\" and \"G\" indicate transition-and graph-based models, respectively.", "For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation.", "For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs.", "re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017) , demonstrating the effectiveness of the character-level information.", "Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English.", "On German, the performance is competitive with BIAF, and significantly better than other models.", "Comparison with Previous Work Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties.", "For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments.", "Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors.", "Sentence Length.", "Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths.", "Consistent with the analysis in Mc-Donald and Nivre (2011) , STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation.", "Dependency Length.", "Figure 3 (b) measures the precision and recall relative to dependency lengths.", "While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown Table 3 : UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers.", "Bi-Att is the bi-directional attention based parser (Cheng et al., 2016) , and NeuroMST is the neural MST parser .", "\"Best Published\" includes the most accurate parsers in term of UAS among , Martins et al.", "(2011) , Martins et al.", "(2013) , , , Zhang and McDonald (2014) , Pitler and McDonald (2015) , and Cheng et al.", "(2016) .", "in McDonald and Nivre (2011) .", "One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first.", "Root Distance.", "Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root.", "Different from the observation in McDonald and Nivre (2011) , STACKPTR does not show an obvious advantage on the precision for arcs further away from the root.", "Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre 
(2011) .", "This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser.", "Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags.", "With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance.", "We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively.", "STACKPTR in these experiments is the Full model with beam=10.", "Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.", "The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.", "The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags.", "It illustrates that an end-to-end parser that doesn't rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB).", "Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines.", "Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) for comparison.", "Our parser achieves state-of-theart performance on both UAS and LAS on eight languages -Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish.", "On Bulgarian and Dutch, our parser obtains the best UAS.", "On other languages, the performance of our parser is competitive with BIAF, and significantly better than others.", "The only exception is Japanese, on which NeuroMST obtains the best scores.", "Experiments on Other Treebanks CoNLL Treebanks UD Treebanks For UD Treebanks, we select 12 languages -Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish.", "For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank.", "The statistics of these corpora are provided in Appendix B.", "Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.", "First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages -all with UAS are higher than 90%.", "On nine languages -Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish -STACKPTR outperforms BIAF for both UAS and LAS.", "On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF.", "On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR.", "Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing.", "Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the 
left-to-right restriction in classical transition-based parsers, while maintaining linear parsing steps w.r.t. the length of the sentences.", "Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-of-the-art performance on 21 corpora.", "There are several potential directions for future work.", "First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively.", "Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order." ] }
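The decoding procedure and biaffine scorer described in the paper record above can be made concrete with a short sketch. This is a minimal illustration under stated assumptions, not the authors' released implementation: the dimensions, the class and function names, the greedy arg-max child selection (the paper decodes with beam search), the single LSTM cell, and the use of position 0 as the root symbol $ are all choices of this sketch, and the grandparent/sibling input terms and MLP transforms are omitted.

```python
# Minimal sketch of the biaffine pointer scoring (e_i = h^T W s_i + U^T h + V^T s_i + b)
# and one top-down decoding step. All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class BiaffineAttention(nn.Module):
    def __init__(self, dec_dim, enc_dim):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dec_dim, enc_dim) * 0.01)  # bilinear term
        self.U = nn.Parameter(torch.zeros(dec_dim))                  # linear term on h_t
        self.V = nn.Parameter(torch.zeros(enc_dim))                  # linear term on s_i
        self.b = nn.Parameter(torch.zeros(1))                        # bias

    def forward(self, h, S):
        # h: (dec_dim,) decoder state, S: (n, enc_dim) encoder states -> (n,) scores
        return S @ (self.W.t() @ h) + self.U @ h + S @ self.V + self.b

def decode_step(scorer, cell, state, S, stack, available):
    """Point from the head on top of the stack to one child, or to itself to pop."""
    head = stack[-1]
    h, c = cell(S[head].unsqueeze(0), state)      # decoder input: encoder state of the head
    scores = scorer(h.squeeze(0), S)
    mask = torch.full_like(scores, float("-inf"))
    mask[list(available) + [head]] = 0.0          # only unattached words or the head itself
    if head == 0 and available:                   # keep the root on the stack until all words attach
        mask[0] = float("-inf")
    child = int((scores + mask).argmax())
    if child == head:                             # head points to itself: all children found, pop
        stack.pop()
    else:                                         # otherwise create the arc and push the child
        available.discard(child)
        stack.append(child)
    return (head, child), (h, c)

# Toy run on a 6-position sentence (position 0 standing in for the root symbol $).
S = torch.randn(6, 64)
scorer, cell = BiaffineAttention(64, 64), nn.LSTMCell(64, 64)
state = (torch.zeros(1, 64), torch.zeros(1, 64))
stack, available, arcs = [0], set(range(1, 6)), []
while stack:
    (head, child), state = decode_step(scorer, cell, state, S, stack, available)
    if head != child:
        arcs.append((head, child))
print(sorted(arcs))   # five (head, child) arcs, one per word
```

Even with untrained weights the toy loop terminates and attaches each word exactly once, because the root stays on the stack until the list of available words is empty; the trained parser described above additionally sums grandparent and sibling encoder states into the decoder input and searches with a beam.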
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "4.1", "4.2", "4.4", "4.4.1", "4.4.2", "4.5.2", "5" ], "paper_header_content": [ "Introduction", "Background", "Dependency Parsing and Notations", "Pointer Networks", "Overview", "Encoder", "Decoder", "Higher-order Information", "Biaffine Attention Mechanism", "Training Objectives", "Discussion", "Implementation Details", "Setup", "Main Results", "Error Analysis", "Length and Graph Factors", "Effect of POS Embedding", "UD Treebanks", "Conclusion" ] }
GEM-SciDuet-train-37#paper-1053#slide-19
Model Details
Bi-directional LSTM-CNN (Chiu and Nichols 2016; Ma and Hovy Three input embeddings: word, character and POS CNN encodes character-level information 3-layer LSTM with recurrent dropout (Gal et al., 2016) - Use encoder hidden states as input instead of word embeddings
Bi-directional LSTM-CNN (Chiu and Nichols 2016; Ma and Hovy Three input embeddings: word, character and POS CNN encodes character-level information 3-layer LSTM with recurrent dropout (Gal et al., 2016) - Use encoder hidden states as input instead of word embeddings
[]
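The "Model Details" slide above summarizes the BLSTM-CNN encoder: word, character, and POS embeddings, a character-level CNN, and a multi-layer bidirectional LSTM whose hidden states s i are later fed to the decoder in place of word embeddings. The sketch below mirrors that wiring under assumed sizes; the inter-layer dropout stands in for the variational recurrent dropout of Gal and Ghahramani (2016), which plain nn.LSTM does not provide, and none of the hyper-parameters are claimed to match the authors' configuration.

```python
# Illustrative sketch of the BLSTM-CNN encoder from the slide; sizes are assumptions.
import torch
import torch.nn as nn

class BLSTMCNNEncoder(nn.Module):
    def __init__(self, n_words, n_chars, n_pos,
                 word_dim=100, char_dim=50, pos_dim=50, cnn_filters=50, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.pos_emb = nn.Embedding(n_pos, pos_dim)
        self.char_cnn = nn.Conv1d(char_dim, cnn_filters, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(word_dim + cnn_filters + pos_dim, hidden, num_layers=3,
                              bidirectional=True, batch_first=True, dropout=0.33)

    def forward(self, words, chars, pos):
        # words: (batch, seq), chars: (batch, seq, max_chars), pos: (batch, seq)
        b, s, c = chars.shape
        ch = self.char_emb(chars).view(b * s, c, -1).transpose(1, 2)         # (b*s, char_dim, c)
        ch = torch.relu(self.char_cnn(ch)).max(dim=2).values.view(b, s, -1)  # char-level vectors
        x = torch.cat([self.word_emb(words), ch, self.pos_emb(pos)], dim=-1)
        states, _ = self.bilstm(x)
        return states              # (batch, seq, 2 * hidden): the s_i used by the decoder

# Toy shapes: 2 sentences, 7 tokens each, up to 12 characters per token.
enc = BLSTMCNNEncoder(n_words=1000, n_chars=100, n_pos=20)
s = enc(torch.randint(0, 1000, (2, 7)),
        torch.randint(0, 100, (2, 7, 12)),
        torch.randint(0, 20, (2, 7)))
print(s.shape)   # torch.Size([2, 7, 512])
```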
GEM-SciDuet-train-38#paper-1054#slide-0
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretation of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequenceto-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvement should be made on these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing related work to our proposed interpretation method.", "For the attention in RNN-based sequence-tosequence model, the first comprehensive analysis is conducted by Ghader and Monz (2017) .", "They argued that the attention in such systems agree with word alignment to a certain extent by showing that the RNN-based system achieves comparable alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that they are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al.", "(2017) presented a toolkit that facilitates study for the attention in RNN-based models.", "There is also a number of other studies that analyze the attention in Transformer models.", "Tang et al.", "(2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in Transformer model (what they refer to as advanced attention mechanism) is very different from that of the attention in RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper in this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis on each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq models.", "Their goal was to improve translation output and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016) .", "In terms of improvements in word alignment quality, Legrand et al.", "(2016) ; Wang et al.", "(2018) ; proposed neu-ral word alignment modules decoupled from NMT systems, while Zenkel et al.", "(2019) introduced a separate module to extract alignment from NMT decoder states, with which they achieved comparable AER with fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al.", "(2013); Springenberg et al.", "(2014) ; Smilkov et al.", "(2017) .", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al.", "(2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al.", "(2017) used Layer-wise Relevance Propagation (LRP; Bach et al.", "2015) , an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model prediction, we are referring to the following problem: given a trained MT model and input tokens S = {s 0 , s 1 , .", ".", ".", ", s I−1 }, at a certain time step j when the models predicts t j , we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t j might not be arg max t j p(t j | t 1:j−1 ), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem regarding interpreting attention weights as word alignment.", "Suppose for the same source sentence, there are two alternative translations that diverge at target time step j, generating t j and t ′ j which respectively correspond to different source words.", "Presumably, the source word that is aligned to t j and t ′ j should changed correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before prediction of t j or t ′ j .", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation with the specified output label, regard-less of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al.", "(2017) ).", "Specifically, they do not refer to the weight of self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input 
feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x 0 , y 0 ), with y 0 being a specific image class and x 0 being an |X |-dimensional vector.", "Each entry of x 0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x 0 , a trained classifier can generate a prediction score for class y 0 , denoted as p(y 0 | x 0 ).", "Consider the first-order Taylor expansion of a perturbed version of this score at the neighborhood of input x 0 : p(y 0 | x) ≈ p(y 0 | x 0 ) + ∂p(y 0 | x) ∂x x 0 · (x − x 0 ) (1) This is essentially re-formulating the perturbed prediction score p(y 0 | x) as an affine approximation of the input features, while the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to the feature.", "Assuming a feature that is deemed as salient for the local perturbation of the prediction score would also be globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as ∂p(y | x) ∂x .", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such formulation has following nice properties: • The saliency of an input feature is related to the choice of output class y, as model scores of different output classes correspond to a different set of parameters, and hence resulting in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient could be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it could be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D Tensor corresponding to the level in each channel.", "The key question to apply such method to NMT is what constitutes the input feature to a NMT system.", "Li et al.", "(2016) proposed to use the embedding of of the input words as the input feature to formulate saliency score, which results in the saliency of an input word being a vector of the same dimension as embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute value of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and an one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence could be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z has certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of the words in the vocabulary.", "For the output word t j at time step j, we can similarly define the saliency of the one-hot vector z as: Ψ(z, t j ) = ∂p(t j | Z) ∂z (2) where p(t j | Z) is the probability of word t j generated by the NMT model given source sentence Z. 
Ψ(z, t j ) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z.", "1 Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s i : ψ(s i , t j ) = [ ∂p(t j | Z) ∂z ] s i = [ ∂p(t j | Z) ∂W s i · ∂W s i ∂z ] s i = [ ∂p(t j | Z) ∂W s i · W ] s i = ∂p(t j | Z) ∂W s i · W s i (3) where [·] i denotes the i-th row of a matrix or the ith element of a vector.", "In other words, the saliency ψ(s i , t j ) is a weighted sum of the word embedding of input word s i , with the partial gradient of each cell as the weight.", "By comparison, the word saliency 2 in Li et al.", "(2016) is defined as: ψ ′ (s i , t j ) = mean ( ∂p(t j | Z) ∂W s i ) (4) There are two implementation details that we would like to call for the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of embedding for such word need to be made to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s i , t j ) is not a probability distribution, which does not affect word alignment results because we are taking arg max.", "For visualizations presented herein, we normalized the distribution by p( s i | t j ) ∝ max(0, ψ(s i , t j )).", "One may also use softmax function for applications that need more well-formed probability distribution.", "1 Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of 0-entries.", "2 Li et al.", "(2016) mostly focused on studying saliency on the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradientbased saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e.", "having a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al.", "(2017) .", "The idea is to augment the input to the network into n samples by adding random noise generated by normal distribution N (0, σ 2 ).", "The saliency scores of each augmented sample are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot 
vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.", "Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations for our proposed method using resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source and measure AER against this reference.", "3 Notice that both automatic evaluation methods have their respective limitation: the force decoding method may force the model to predict something it deems unlikely, and thus generating noisy alignment; whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al.", "(2019) in data setup and use the accompanied scripts of that paper 4 for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq 5 to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014) , convolution systems (FConv) (Gehring et al., 2017) , and Transformer systems (Transformer) (Vaswani et al., 2017) .", "We use the pre-configured model architectures for IWSLT German-English experiments 6 to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For transformer, we follow the implementation in fairseq and used the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple version of attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to prove that smoothing itself does not improve the interpretation quality, and has to be used together with effective interpretation method; • (Li et al., 2016) : applied with normal backpropagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure in (Zenkel et al., 2019) to convert soft alignment scores to hard alignment.", "For force decoding experiments, we generate symmetrized alignment results with growdiag-final.", "We also include AER results 7 of fast-align (Dyer et al., 2013) , GIZA++ 8 and the best model (Add+SGD) from Zenkel et al.", "(2019) on the same dataset for comparison.", "However, the readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and is not making any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update model with full sentence to generate optimal 
alignments, while our system and Zenkel et al.", "(2019) can do so on-the-fly.", "7 We reproduced the fast-align results as a sanity check and we were able to perfectly replicate their numbers with their released scripts.", "8 https://github.com/moses-smt/giza-pp Realizing the second caveat, we also run fastalign under the online alignment scenario, where we first train a fast-align model and decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) , where we cannot practically update alignment models onthe-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities such as Zenkel et al.", "(2019) and this work.", "The data used in Zenkel et al.", "(2019) did not provide a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30sentence subset of the German-English test data with the Transformer model.", "We ended up using the recommended σ = 0.15 in the original paper and a slightly smaller sample size n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For com-parison with previous work, we do not exclude these sentences from the reported results, we instead mark the numbers affected to raise caution.", "Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is only reduced for FConv model but instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER increases by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up with 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by input noise, as the same smoothing procedure for attention increases the AER most of the times.", "To summarize, at least under force decoding settings, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "Force Decoding Results As for Li et al.", "(2016) , for FConv and LSTM architectures, it is not only consistently worse than our method, but at times also worse than attention.", "Besides, the effect of SmoothGrad is also not as consistent on their saliency formulation as ours.", "Although with the Transformer model, the Li et al.", "(2016) method obtained better AER than our method under several settings, it is still pretty clear overall that the superior mathematical soundness of our method is translated into better interpretation quality.", "While the GIZA++ model obtains the best alignment result in Table 1 9 , most of our word alignment interpretation of FConv model with Smooth-Grad surpasses the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on-par with (Zenkel et al., 2019) .", "These are notable results as our method is an interpretation method and no extra parameter is updated to optimize the quality of alignment.", "On the other hand, this also indicates that it is possible to induce high-quality 9 While Ghader and Monz (2017) showed that the AER obtained by LSTM model is close to that of GIZA++, our experiments yield a much larger 
difference.", "We think this is largely due to the fact that we choose to train our model with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "alignments from NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "Most interestingly, although NMT is often deemed as performing poorly under low-resource setting, our interpretation seems to work relatively well on ro<>en language pair, which happens to be the language pair that we have least training data for.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To our best knowledge, this is also a novel insight, as no one has analyzed attention weights of FConv with other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap of the attention noise level between different model architecture, it does manage to narrow the difference in some cases.", "Table 2 shows the result under free decoding setting.", "The trend in this group of experiment is similar to Table 1 , except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher quality translations, but could also be partially attributed to the noise in fast-align reference.", "Also, notice that the AER numbers are also generally lower compared to Table 1 under this setting.", "One reason is that our model is aligning output with which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translation in the test set and the NMT output, we find that it is generally easier to align the translation as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al.", "(2016) The main reason why the word saliency formulation in Li et al.", "(2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way does the input influence.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e.", "have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency on several target words, e.g.", "should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating significant negative contributions to the prediction of that target word.", "Hence, a good word alignment interpreta-tion should strongly avoid aligning them.", "Free Decoding Results SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor to reduce AER, especially for Transformer.", "Figure 3 Table 1 .", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve 
gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between source word wir and target word we: in Subfigure (a), this word pair has very low saliency, but in (c), they become the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very Table 3 : Alignment distribution entropy for selected deen models.", "att stands for attention in Table 1. poor representation of the global saliency almost all the time.", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation will give a better estimation of the global saliency.", "It could also be observed from Subfigures (b) and (d) that when the noise is too moderate, the evaluation does not deviate enough from the original spot to gain non-local information, and at (d) it deviates too much and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviation σ and report their dispersion as measured by entropy of the (soft) alignment distribution averaged by number of target words.", "Results are summarized in Ta-ble 3, where lower entropy indicates more peaky alignments.", "First, we observe that dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3 .", "It should also be noted that the alignment dispersion is consistently lower for free decoding than force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise in the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018) .", "Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among many others available (Montavon et al., 2018) .", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods on mostlyconvolutional architectures in computer vision and architectures with more non-linearities in NLP.", "We hope the future research from the 
NLP and machine learning communities could bridge this gap.", "Secondly, the alignment errors in our method comes from three different sources: the limitation of NMT models on learning word alignments, the limitation of interpretation method on recovering interpretable word alignments, and the ambiguity in word alignments itself.", "Although we have shown that high quality alignment could be recovered from NMT systems (thus pushing our understanding on the limitation of NMT models), we are not yet able to separate these sources of errors in this work.", "While exploration on this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such purpose, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we are only conducting approximate evaluation to measure the ability of our interpretation method.", "An immediate future work would be evaluating this on human-annotated translation outputs generated by the NMT system.", "Conclusion We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, is able to be applied either offline or online, and does not require any parameter updates or architectural change.", "Both force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality compared to its attentionbased counterpart.", "Our empirical results also probe into the NMT black-box and reveal that even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-0
Revisiting Six Challenges
attention is not word alignment large beam does not help Saliency-driven Word Alignment Interpretation for NMT
attention is not word alignment large beam does not help Saliency-driven Word Alignment Interpretation for NMT
[]
GEM-SciDuet-train-38#paper-1054#slide-1
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretation of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequenceto-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvement should be made on these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing related work to our proposed interpretation method.", "For the attention in RNN-based sequence-tosequence model, the first comprehensive analysis is conducted by Ghader and Monz (2017) .", "They argued that the attention in such systems agree with word alignment to a certain extent by showing that the RNN-based system achieves comparable alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that they are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al.", "(2017) presented a toolkit that facilitates study for the attention in RNN-based models.", "There is also a number of other studies that analyze the attention in Transformer models.", "Tang et al.", "(2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in Transformer model (what they refer to as advanced attention mechanism) is very different from that of the attention in RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper in this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis on each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq model.", "Their goal was to improve translation output and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016) .", "In terms of improvements in word alignment quality, Legrand et al.", "(2016) ; Wang et al.", "(2018) ; proposed neural word alignment modules decoupled from NMT systems, while Zenkel et al.", "(2019) introduced a separate module to extract alignment from NMT decoder states, with which they achieved AER comparable to that of fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al.", "(2013); Springenberg et al.", "(2014) ; Smilkov et al.", "(2017) .", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al.", "(2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al.", "(2017) used Layer-wise Relevance Propagation (LRP; Bach et al.", "2015) , an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns about the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model prediction, we are referring to the following problem: given a trained MT model and input tokens S = {s_0, s_1, ..., s_{I−1}}, at a certain time step j when the model predicts t_j, we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t_j might not be argmax_{t_j} p(t_j | t_{1:j−1}), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem regarding interpreting attention weights as word alignment.", "Suppose that for the same source sentence, there are two alternative translations that diverge at target time step j, generating t_j and t′_j, which respectively correspond to different source words.", "Presumably, the source word that is aligned to t_j and t′_j should change correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before the prediction of t_j or t′_j.", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation to the specified output label, regardless of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al.", "(2017) ).", "Specifically, they do not refer to the weights of the self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input
feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing an analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x_0, y_0), with y_0 being a specific image class and x_0 being an |X|-dimensional vector.", "Each entry of x_0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x_0, a trained classifier can generate a prediction score for class y_0, denoted as p(y_0 | x_0).", "Consider the first-order Taylor expansion of a perturbed version of this score in the neighborhood of input x_0: p(y_0 | x) ≈ p(y_0 | x_0) + ∂p(y_0 | x)/∂x |_{x_0} · (x − x_0) (1) This essentially re-formulates the perturbed prediction score p(y_0 | x) as an affine approximation of the input features, with the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to that feature.", "Assuming that a feature deemed salient for the local perturbation of the prediction score is also globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as ∂p(y | x)/∂x.", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such a formulation has the following nice properties: • The saliency of an input feature is related to the choice of output class y, as model scores of different output classes correspond to different sets of parameters, hence resulting in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient can be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it can be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D tensor corresponding to the level in each channel.", "The key question in applying such a method to NMT is what constitutes the input feature to an NMT system.", "Li et al.", "(2016) proposed to use the embedding of the input words as the input feature to formulate the saliency score, which results in the saliency of an input word being a vector of the same dimension as the embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute value of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and a one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence can be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z has a certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of the words in the vocabulary.", "For the output word t_j at time step j, we can similarly define the saliency of the one-hot vector z as: Ψ(z, t_j) = ∂p(t_j | Z)/∂z (2) where p(t_j | Z) is the probability of word t_j generated by the NMT model given source sentence Z.
Ψ(z, t_j) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z. [1]", "Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s_i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s_i: ψ(s_i, t_j) = [∂p(t_j | Z)/∂z]_{s_i} = [∂p(t_j | Z)/∂W_{s_i} · ∂W_{s_i}/∂z]_{s_i} = [∂p(t_j | Z)/∂W_{s_i} · W]_{s_i} = ∂p(t_j | Z)/∂W_{s_i} · W_{s_i} (3) where [·]_i denotes the i-th row of a matrix or the i-th element of a vector.", "In other words, the saliency ψ(s_i, t_j) is a weighted sum of the word embedding of input word s_i, with the partial gradient of each cell as the weight.", "By comparison, the word saliency [2] in Li et al.", "(2016) is defined as: ψ′(s_i, t_j) = mean(∂p(t_j | Z)/∂W_{s_i}) (4) There are two implementation details that we would like to bring to the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of the embedding for that word need to be made to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s_i, t_j) is not a probability distribution, which does not affect word alignment results because we are taking the arg max.", "For visualizations presented herein, we normalized the distribution by p(s_i | t_j) ∝ max(0, ψ(s_i, t_j)).", "One may also use the softmax function for applications that need a more well-formed probability distribution.", "[1] Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding matrix is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of the 0-entries.", "[2] Li et al.", "(2016) mostly focused on studying saliency at the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradient-based saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e.", "having a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al.", "(2017) .", "The idea is to augment the input to the network into n samples by adding random noise generated by a normal distribution N(0, σ^2).", "The saliency scores of each augmented sample are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot
vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.", "Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations for our proposed method using resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source and measure AER against this reference.", "3 Notice that both automatic evaluation methods have their respective limitation: the force decoding method may force the model to predict something it deems unlikely, and thus generating noisy alignment; whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al.", "(2019) in data setup and use the accompanied scripts of that paper 4 for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq 5 to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014) , convolution systems (FConv) (Gehring et al., 2017) , and Transformer systems (Transformer) (Vaswani et al., 2017) .", "We use the pre-configured model architectures for IWSLT German-English experiments 6 to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For transformer, we follow the implementation in fairseq and used the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple version of attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to prove that smoothing itself does not improve the interpretation quality, and has to be used together with effective interpretation method; • (Li et al., 2016) : applied with normal backpropagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure in (Zenkel et al., 2019) to convert soft alignment scores to hard alignment.", "For force decoding experiments, we generate symmetrized alignment results with growdiag-final.", "We also include AER results 7 of fast-align (Dyer et al., 2013) , GIZA++ 8 and the best model (Add+SGD) from Zenkel et al.", "(2019) on the same dataset for comparison.", "However, the readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and is not making any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update model with full sentence to generate optimal 
alignments, while our system and Zenkel et al.", "(2019) can do so on-the-fly.", "7 We reproduced the fast-align results as a sanity check and we were able to perfectly replicate their numbers with their released scripts.", "8 https://github.com/moses-smt/giza-pp Realizing the second caveat, we also run fastalign under the online alignment scenario, where we first train a fast-align model and decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) , where we cannot practically update alignment models onthe-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities such as Zenkel et al.", "(2019) and this work.", "The data used in Zenkel et al.", "(2019) did not provide a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30sentence subset of the German-English test data with the Transformer model.", "We ended up using the recommended σ = 0.15 in the original paper and a slightly smaller sample size n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For com-parison with previous work, we do not exclude these sentences from the reported results, we instead mark the numbers affected to raise caution.", "Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is only reduced for FConv model but instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER increases by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up with 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by input noise, as the same smoothing procedure for attention increases the AER most of the times.", "To summarize, at least under force decoding settings, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "Force Decoding Results As for Li et al.", "(2016) , for FConv and LSTM architectures, it is not only consistently worse than our method, but at times also worse than attention.", "Besides, the effect of SmoothGrad is also not as consistent on their saliency formulation as ours.", "Although with the Transformer model, the Li et al.", "(2016) method obtained better AER than our method under several settings, it is still pretty clear overall that the superior mathematical soundness of our method is translated into better interpretation quality.", "While the GIZA++ model obtains the best alignment result in Table 1 9 , most of our word alignment interpretation of FConv model with Smooth-Grad surpasses the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on-par with (Zenkel et al., 2019) .", "These are notable results as our method is an interpretation method and no extra parameter is updated to optimize the quality of alignment.", "On the other hand, this also indicates that it is possible to induce high-quality 9 While Ghader and Monz (2017) showed that the AER obtained by LSTM model is close to that of GIZA++, our experiments yield a much larger 
difference.", "We think this is largely due to the fact that we choose to train our model with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "alignments from NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "Most interestingly, although NMT is often deemed as performing poorly under low-resource setting, our interpretation seems to work relatively well on ro<>en language pair, which happens to be the language pair that we have least training data for.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To our best knowledge, this is also a novel insight, as no one has analyzed attention weights of FConv with other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap of the attention noise level between different model architecture, it does manage to narrow the difference in some cases.", "Table 2 shows the result under free decoding setting.", "The trend in this group of experiment is similar to Table 1 , except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher quality translations, but could also be partially attributed to the noise in fast-align reference.", "Also, notice that the AER numbers are also generally lower compared to Table 1 under this setting.", "One reason is that our model is aligning output with which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translation in the test set and the NMT output, we find that it is generally easier to align the translation as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al.", "(2016) The main reason why the word saliency formulation in Li et al.", "(2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way does the input influence.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e.", "have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency on several target words, e.g.", "should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating significant negative contributions to the prediction of that target word.", "Hence, a good word alignment interpreta-tion should strongly avoid aligning them.", "Free Decoding Results SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor to reduce AER, especially for Transformer.", "Figure 3 Table 1 .", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve 
gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between source word wir and target word we: in Subfigure (a), this word pair has very low saliency, but in (c), they become the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very Table 3 : Alignment distribution entropy for selected deen models.", "att stands for attention in Table 1. poor representation of the global saliency almost all the time.", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation will give a better estimation of the global saliency.", "It could also be observed from Subfigures (b) and (d) that when the noise is too moderate, the evaluation does not deviate enough from the original spot to gain non-local information, and at (d) it deviates too much and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviation σ and report their dispersion as measured by entropy of the (soft) alignment distribution averaged by number of target words.", "Results are summarized in Ta-ble 3, where lower entropy indicates more peaky alignments.", "First, we observe that dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3 .", "It should also be noted that the alignment dispersion is consistently lower for free decoding than force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise in the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018) .", "Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among many others available (Montavon et al., 2018) .", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods on mostlyconvolutional architectures in computer vision and architectures with more non-linearities in NLP.", "We hope the future research from the 
NLP and machine learning communities could bridge this gap.", "Secondly, the alignment errors in our method comes from three different sources: the limitation of NMT models on learning word alignments, the limitation of interpretation method on recovering interpretable word alignments, and the ambiguity in word alignments itself.", "Although we have shown that high quality alignment could be recovered from NMT systems (thus pushing our understanding on the limitation of NMT models), we are not yet able to separate these sources of errors in this work.", "While exploration on this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such purpose, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we are only conducting approximate evaluation to measure the ability of our interpretation method.", "An immediate future work would be evaluating this on human-annotated translation outputs generated by the NMT system.", "Conclusion We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, is able to be applied either offline or online, and does not require any parameter updates or architectural change.", "Both force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality compared to its attentionbased counterpart.", "Our empirical results also probe into the NMT black-box and reveal that even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
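To make the word-saliency formulation in the paper content above concrete, here is a minimal PyTorch-style sketch of Eq. (3): the gradient of p(t_j | Z) with respect to the embedding of each source position, dotted with that embedding. It is a sketch under assumptions, not the authors' released implementation; in particular, model.embed and model.forward_from_embeddings are hypothetical stand-ins for whatever interface a given seq2seq codebase (e.g., a wrapped fairseq model) exposes for source embeddings and target-vocabulary logits.

```python
# Minimal sketch of word saliency for NMT (Eq. 3 in the record above),
# assuming a hypothetical seq2seq interface; not the paper's released code.
import torch

def word_saliency(model, src_tokens, prev_target, step, target_word):
    """Saliency of every source position for predicting `target_word` at `step`."""
    # Hypothetical: returns source embeddings of shape (src_len, emb_dim).
    src_emb = model.embed(src_tokens).detach().requires_grad_(True)
    # Hypothetical: runs encoder + decoder from these embeddings and the target
    # prefix, returning logits of shape (tgt_len, vocab_size).
    logits = model.forward_from_embeddings(src_emb, prev_target)
    # Probability of the chosen target word; it need not be the argmax, which is
    # what lets the interpretation follow a specified output label.
    p_tj = torch.softmax(logits[step], dim=-1)[target_word]
    p_tj.backward()
    # psi(s_i, t_j): gradient of p(t_j | Z) w.r.t. each embedding row, dotted
    # with that row (a weighted sum of the embedding, not a mean |gradient|).
    return (src_emb.grad * src_emb).sum(dim=-1)  # shape: (src_len,)

# A hard alignment point for target step j is the most salient source position:
# aligned_src_pos = int(word_saliency(model, src, prev, j, t_j).argmax())
```

Because each source position has its own row in src_emb, repeated source words automatically receive separate gradients, which matches the first implementation note in the record.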
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-1
A Model Interpretation Problem
Saliency-driven Word Alignment Interpretation for NMT
Saliency-driven Word Alignment Interpretation for NMT
[]
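The evaluations in this record report Alignment Error Rate against sure/possible reference links (from human annotation under force decoding, or from fast-align under free decoding). For reference, a small sketch of the standard Och-and-Ney-style AER computation over hard alignment points follows; this is the textbook definition, not necessarily the exact scoring script used for the numbers above.

```python
# Standard Alignment Error Rate over sets of (source_idx, target_idx) pairs,
# with sure links S and possible links P (S is a subset of P).
def alignment_error_rate(hypothesis, sure, possible):
    hypothesis, sure, possible = set(hypothesis), set(sure), set(possible)
    if not hypothesis and not sure:
        return 0.0
    overlap = len(hypothesis & sure) + len(hypothesis & possible)
    return 1.0 - overlap / (len(hypothesis) + len(sure))

# Example: hyp = {(0, 0), (1, 2)}, sure = {(0, 0)}, possible = {(0, 0), (1, 1)}
# alignment_error_rate(hyp, sure, possible) -> 1 - (1 + 1) / (2 + 1) = 1/3
```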