Dataset fields (name: type, min–max length):
gem_id: string, 37–41
paper_id: string, 3–4
paper_title: string, 19–183
paper_abstract: string, 168–1.38k
paper_content: sequence
paper_headers: sequence
slide_id: string, 37–41
slide_title: string, 2–85
slide_content_text: string, 11–2.55k
target: string, 11–2.55k
references: list
gem_id: GEM-SciDuet-train-38#paper-1054#slide-2
paper_id: 1054
paper_title: Saliency-driven Word Alignment Interpretation for Neural Machine Translation
paper_abstract: Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which can only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, can be applied either offline or online, and require no parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and that when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretation of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequenceto-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvement should be made on these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing related work to our proposed interpretation method.", "For the attention in RNN-based sequence-tosequence model, the first comprehensive analysis is conducted by Ghader and Monz (2017) .", "They argued that the attention in such systems agree with word alignment to a certain extent by showing that the RNN-based system achieves comparable alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that they are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al.", "(2017) presented a toolkit that facilitates study for the attention in RNN-based models.", "There is also a number of other studies that analyze the attention in Transformer models.", "Tang et al.", "(2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in Transformer model (what they refer to as advanced attention mechanism) is very different from that of the attention in RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper in this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis on each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq models.", "Their goal was to improve translation output and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016) .", "In terms of improvements in word alignment quality, Legrand et al.", "(2016) ; Wang et al.", "(2018) ; proposed neu-ral word alignment modules decoupled from NMT systems, while Zenkel et al.", "(2019) introduced a separate module to extract alignment from NMT decoder states, with which they achieved comparable AER with fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al.", "(2013); Springenberg et al.", "(2014) ; Smilkov et al.", "(2017) .", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al.", "(2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al.", "(2017) used Layer-wise Relevance Propagation (LRP; Bach et al.", "2015) , an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model prediction, we are referring to the following problem: given a trained MT model and input tokens S = {s 0 , s 1 , .", ".", ".", ", s I−1 }, at a certain time step j when the models predicts t j , we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t j might not be arg max t j p(t j | t 1:j−1 ), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem regarding interpreting attention weights as word alignment.", "Suppose for the same source sentence, there are two alternative translations that diverge at target time step j, generating t j and t ′ j which respectively correspond to different source words.", "Presumably, the source word that is aligned to t j and t ′ j should changed correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before prediction of t j or t ′ j .", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation with the specified output label, regard-less of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al.", "(2017) ).", "Specifically, they do not refer to the weight of self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input 
Visual Saliency

Suppose we have an image classification example (x_0, y_0), with y_0 being a specific image class and x_0 being an |X|-dimensional vector. Each entry of x_0 is an input feature (i.e., a pixel) to the classifier. Given the input x_0, a trained classifier can generate a prediction score for class y_0, denoted as p(y_0 | x_0). Consider the first-order Taylor expansion of a perturbed version of this score in the neighborhood of the input x_0:

p(y_0 | x) ≈ p(y_0 | x_0) + ∂p(y_0 | x)/∂x |_{x_0} · (x − x_0)   (1)

This essentially re-formulates the perturbed prediction score p(y_0 | x) as an affine approximation of the input features, with the "contribution" of each feature to the final prediction being the partial derivative of the prediction score with regard to that feature. Assuming that a feature deemed salient under local perturbation of the prediction score is also globally salient, the saliency of an input feature is defined as follows:

Definition 1. Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as ∂p(y | x)/∂x.

Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x. Such a formulation has the following nice properties:

- The saliency of an input feature depends on the choice of output class y, as the model scores of different output classes correspond to different sets of parameters, hence resulting in different partial gradients for the input features. This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.
- The partial gradient can be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.
- The formulation is agnostic to the model that generates p(y | x), so it can be applied to any deep learning architecture.
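To make Definition 1 concrete, here is a minimal PyTorch sketch of gradient-based saliency for a generic classifier. The toy linear model, names, and shapes are illustrative assumptions, not the paper's implementation:

```python
import torch

def saliency(model, x, y):
    """Psi(x, y) = d p(y | x) / d x, computed with one backward pass."""
    x = x.clone().detach().requires_grad_(True)  # treat the input as a leaf variable
    prob = torch.softmax(model(x), dim=-1)[y]    # prediction score p(y | x)
    prob.backward()                              # back-propagate to the input
    return x.grad.detach()                       # same shape as x, one value per feature

# Toy usage: a stand-in linear "classifier" over 10 input features, 5 classes.
torch.manual_seed(0)
toy_model = torch.nn.Linear(10, 5)
x0 = torch.randn(10)
print(saliency(toy_model, x0, y=3))
```

Because the gradient is taken with respect to the score of a chosen class y, the same input yields a different saliency vector for each output label, which is exactly the property the attention-weight interpretation lacks.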
Word Saliency

In computer vision, the input feature is a 3D tensor corresponding to the level in each channel. The key question in applying such a method to NMT is what constitutes the input feature of an NMT system. Li et al. (2016) proposed to use the embedding of the input words as the input feature to formulate the saliency score, which results in the saliency of an input word being a vector of the same dimension as the embedding vectors. To obtain a scalar saliency value, they computed the mean of the absolute values of the embedding gradients.

We argue that there is a more mathematically principled way to approach this. To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and a one-hot vector z. The size of z is the same as the source vocabulary size. Similarly, the input sentence can be formulated as a matrix Z with only 0 and 1 entries. Notice that z bears a certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of the words in the vocabulary. For the output word t_j at time step j, we can similarly define the saliency of the one-hot vector z as:

Ψ(z, t_j) = ∂p(t_j | Z)/∂z   (2)

where p(t_j | Z) is the probability of word t_j generated by the NMT model given source sentence Z. Ψ(z, t_j) is a vector of the same size as z.

However, note that there is a key difference between z and pixels. If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell. While the black regions of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z.[1] Hence, we only take the 1-entries of matrix Z as the input to the NMT model. For a source word s_i in the source sentence, this means we only care about the saliency of the 1-entry, i.e., the entry corresponding to source word s_i:

ψ(s_i, t_j) = [∂p(t_j | Z)/∂z]_{s_i} = [∂p(t_j | Z)/∂W_{s_i} · ∂W_{s_i}/∂z]_{s_i} = [∂p(t_j | Z)/∂W_{s_i} · W]_{s_i} = ∂p(t_j | Z)/∂W_{s_i} · W_{s_i}   (3)

where [·]_i denotes the i-th row of a matrix or the i-th element of a vector. In other words, the saliency ψ(s_i, t_j) is a weighted sum of the word embedding of input word s_i, with the partial gradient of each cell as the weight. By comparison, the word saliency[2] in Li et al. (2016) is defined as:

ψ′(s_i, t_j) = mean(∂p(t_j | Z)/∂W_{s_i})   (4)

There are two implementation details that we would like to call to the reader's attention:

- When the same word occurs multiple times in the source sentence, multiple copies of the embedding for that word need to be made, to ensure that the gradients flowing to different instances of the same word are not merged;
- Note that ψ(s_i, t_j) is not a probability distribution, which does not affect word alignment results because we are taking the argmax. For the visualizations presented herein, we normalized the distribution by p(s_i | t_j) ∝ max(0, ψ(s_i, t_j)). One may also use the softmax function for applications that need a more well-formed probability distribution.

[1] Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication. Instead, it is implemented as a table look-up, so for each input word, only one row of the word embedding is fed into the subsequent computation. As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated. This is another reason why we choose to discard the saliency of 0-entries.

[2] Li et al. (2016) mostly focused on studying saliency at the level of word embedding dimensions. This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.
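A sketch of Equations (3) and (4) in the same toy PyTorch setting follows; `score_fn` is an assumed black-box stand-in for the NMT model's p(t_j | Z), and the per-occurrence embedding copy implements the first implementation detail above:

```python
import torch

def word_saliency(embed_matrix, src_ids, score_fn, t_j):
    # One embedding copy per token occurrence, so gradients of repeated
    # source words are not merged.
    src_emb = embed_matrix[src_ids].clone().detach().requires_grad_(True)
    score_fn(src_emb)[t_j].backward()         # p(t_j | Z) for the chosen label
    grads = src_emb.grad                      # one gradient row per source token
    psi = (grads * src_emb).sum(dim=1)        # Eq. (3): gradient-weighted embedding sum
    psi_li = grads.mean(dim=1)                # Eq. (4): Li et al. (2016)
    viz = torch.clamp(psi, min=0)             # p(s_i | t_j) ∝ max(0, psi) for plots
    if viz.sum() > 0:
        viz = viz / viz.sum()
    return psi, psi_li, viz

# Toy usage: vocabulary of 100, dimension 16; note the repeated token 5.
torch.manual_seed(0)
W = torch.randn(100, 16)
proj = torch.randn(16, 100)
score_fn = lambda e: torch.softmax(e.sum(dim=0) @ proj, dim=-1)
print(word_saliency(W, torch.tensor([5, 7, 5]), score_fn, t_j=42))
```

Note that, unlike Eq. (4), the Eq. (3) quantity keeps its sign, which is what later lets negative contributions be excluded from alignment.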
SmoothGrad

There are two scenarios where the naïve gradient-based saliency may make mistakes:

- For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.
- If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e., have a partial gradient of 0. This does not necessarily mean they are not salient with regard to the prediction.

We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al. (2017). The idea is to augment the input to the network into n samples by adding random noise drawn from the normal distribution N(0, σ²). The saliency scores of the augmented samples are then averaged to cancel out the noise in the gradients. We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot vectors, we instead add noise to the queried embedding vectors. This allows us to introduce more randomness for each word input.
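A sketch of SmoothGrad as modified above: noise is added to the gathered embedding vectors rather than the one-hot inputs, and the n gradient estimates are averaged. It reuses the toy `score_fn` convention from the previous sketch; treating σ as an absolute standard deviation, and taking the dot product against the clean embedding, are simplifying assumptions (Smilkov et al. (2017) scale the noise to the input range):

```python
import torch

def smoothgrad_word_saliency(embed_matrix, src_ids, score_fn, t_j,
                             n=30, sigma=0.15):
    base = embed_matrix[src_ids].clone().detach()
    total = torch.zeros(len(src_ids))
    for _ in range(n):
        # Perturb the queried embeddings, not the one-hot word inputs.
        noisy = (base + sigma * torch.randn_like(base)).requires_grad_(True)
        score_fn(noisy)[t_j].backward()          # gradient at one noisy sample
        total += (noisy.grad * base).sum(dim=1)  # Eq. (3) against the clean embedding
    return total / n                             # averaged word saliency
```

The extra forward/backward passes are what let the estimate escape a saturated or locally noisy neighborhood, which the analysis in Section 6.2 below relies on.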
Experiments

Evaluation Method

The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study. Instead, we conduct two automatic evaluations of our proposed method using the resources available:

- force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus, and measure AER against the human alignment;
- free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source, and measure AER against this reference.[3]

Notice that both automatic evaluation methods have their respective limitations: the force decoding method may force the model to predict something it deems unlikely, thus generating noisy alignments, whereas the free decoding method lacks authentic references.

Setup

We follow Zenkel et al. (2019) in our data setup and use the accompanying scripts of that paper[4] for preprocessing. Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for the German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair. There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.

For our NMT systems, we use fairseq[5] to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014), convolutional systems (FConv) (Gehring et al., 2017), and Transformer systems (Transformer) (Vaswani et al., 2017). We use the pre-configured model architectures for the IWSLT German-English experiments[6] to build all NMT systems. Our experiments cover the following interpretation methods:

- Attention: directly take the attention weights as soft alignment scores. For Transformer, we follow the implementation in fairseq and use the attention weights from the final layer, averaged across all heads;
- Smoothed Attention: obtain multiple versions of the attention weights with the same data augmentation procedure as SmoothGrad and average them. This is to show that smoothing by itself does not improve interpretation quality and has to be used together with an effective interpretation method;
- Li et al. (2016): applied with normal back-propagation (Grad) and SmoothGrad;
- Ours: applied with normal back-propagation (Grad) and SmoothGrad.

For all the methods above, we follow the same procedure as Zenkel et al. (2019) to convert soft alignment scores to hard alignments. For the force decoding experiments, we generate symmetrized alignment results with grow-diag-final.
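The two scoring ingredients used throughout can be sketched in a few lines of plain Python: converting a soft score matrix to hard alignments by per-target argmax (a simplified stand-in for the exact conversion procedure of Zenkel et al. (2019)), and AER against sure (S) and possible (P) reference links:

```python
def hard_alignment(scores):
    """scores[i][j]: score of source word i for target word j -> {(i, j), ...}."""
    n_src, n_tgt = len(scores), len(scores[0])
    return {(max(range(n_src), key=lambda i: scores[i][j]), j)
            for j in range(n_tgt)}

def aer(hyp, sure, possible):
    """AER = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|), with S ⊆ P."""
    return 1.0 - (len(hyp & sure) + len(hyp & possible)) / (len(hyp) + len(sure))

# Toy usage: a 2-source x 2-target score matrix and a tiny reference.
scores = [[0.9, 0.1],
          [0.2, 0.8]]
hyp = hard_alignment(scores)  # {(0, 0), (1, 1)}
print(hyp, aer(hyp, sure={(0, 0)}, possible={(0, 0), (1, 1)}))  # AER = 0.0
```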
difference.", "We think this is largely due to the fact that we choose to train our model with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "alignments from NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "Most interestingly, although NMT is often deemed as performing poorly under low-resource setting, our interpretation seems to work relatively well on ro<>en language pair, which happens to be the language pair that we have least training data for.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To our best knowledge, this is also a novel insight, as no one has analyzed attention weights of FConv with other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap of the attention noise level between different model architecture, it does manage to narrow the difference in some cases.", "Table 2 shows the result under free decoding setting.", "The trend in this group of experiment is similar to Table 1 , except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher quality translations, but could also be partially attributed to the noise in fast-align reference.", "Also, notice that the AER numbers are also generally lower compared to Table 1 under this setting.", "One reason is that our model is aligning output with which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translation in the test set and the NMT output, we find that it is generally easier to align the translation as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al.", "(2016) The main reason why the word saliency formulation in Li et al.", "(2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way does the input influence.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e.", "have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency on several target words, e.g.", "should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating significant negative contributions to the prediction of that target word.", "Hence, a good word alignment interpreta-tion should strongly avoid aligning them.", "Free Decoding Results SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor to reduce AER, especially for Transformer.", "Figure 3 Table 1 .", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve 
gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between source word wir and target word we: in Subfigure (a), this word pair has very low saliency, but in (c), they become the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very Table 3 : Alignment distribution entropy for selected deen models.", "att stands for attention in Table 1. poor representation of the global saliency almost all the time.", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation will give a better estimation of the global saliency.", "It could also be observed from Subfigures (b) and (d) that when the noise is too moderate, the evaluation does not deviate enough from the original spot to gain non-local information, and at (d) it deviates too much and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviation σ and report their dispersion as measured by entropy of the (soft) alignment distribution averaged by number of target words.", "Results are summarized in Ta-ble 3, where lower entropy indicates more peaky alignments.", "First, we observe that dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3 .", "It should also be noted that the alignment dispersion is consistently lower for free decoding than force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise in the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018) .", "Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among many others available (Montavon et al., 2018) .", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods on mostlyconvolutional architectures in computer vision and architectures with more non-linearities in NLP.", "We hope the future research from the 
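The dispersion metric above can be sketched as follows: per-target-word entropy of the normalized soft alignment scores, averaged over target words (assuming non-negative scores, e.g. after the max(0, ·) normalization from Section 4.2):

```python
import math

def alignment_entropy(soft_scores):
    """soft_scores[j]: non-negative scores over source words for target j.
    Lower average entropy means peakier (more confident) alignments."""
    total = 0.0
    for row in soft_scores:
        z = sum(row)
        total += -sum((s / z) * math.log(s / z) for s in row if s > 0)
    return total / len(soft_scores)

# A peaky distribution scores lower than a dispersed one:
print(alignment_entropy([[0.97, 0.01, 0.02]]))  # ~0.15
print(alignment_entropy([[0.4, 0.3, 0.3]]))     # ~1.09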
Discussion And Future Work

There are several extensions to this work that we would like to discuss in this section. First, in this paper we only explored two saliency methods among the many others available (Montavon et al., 2018). In a preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem. We suspect that there is a gap between applying these methods to the mostly-convolutional architectures in computer vision and to architectures with more non-linearities in NLP. We hope future research from the NLP and machine learning communities can bridge this gap.

Secondly, the alignment errors of our method come from three different sources: the limitations of NMT models in learning word alignments, the limitations of the interpretation method in recovering interpretable word alignments, and the ambiguity of word alignment itself. Although we have shown that high-quality alignments can be recovered from NMT systems (thus pushing forward our understanding of the limitations of NMT models), we are not yet able to separate these sources of error in this work. While exploration in this direction would help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity. For such purposes, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.

Finally, as mentioned before, we only conduct approximate evaluations to measure the ability of our interpretation method. An immediate piece of future work would be to evaluate it on human-annotated translation outputs generated by the NMT systems.

Conclusion

We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions. Our proposal is model-agnostic, can be applied either offline or online, and does not require any parameter updates or architectural changes. Both the force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality than its attention-based counterpart. Our empirical results also probe into the NMT black box and reveal that, even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align. The model and code for our experiments are available at https://github.com/shuoyangd/meerkat.
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
slide_id: GEM-SciDuet-train-38#paper-1054#slide-2
slide_title: Related Findings Outside MT
slide_content_text: Attention is not Explanation [Jain and Wallace NAACL 2019] Is Attention Interpretable? (Spoiler: No) [Serrano and Smith ACL 2019] We also have empirical results that corroborate these findings. and we have method that works better! Saliency-driven Word Alignment Interpretation for NMT
target: Attention is not Explanation [Jain and Wallace NAACL 2019] Is Attention Interpretable? (Spoiler: No) [Serrano and Smith ACL 2019] We also have empirical results that corroborate these findings. and we have method that works better! Saliency-driven Word Alignment Interpretation for NMT
references: []
GEM-SciDuet-train-38#paper-1054#slide-3
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretation of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequenceto-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvement should be made on these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing related work to our proposed interpretation method.", "For the attention in RNN-based sequence-tosequence model, the first comprehensive analysis is conducted by Ghader and Monz (2017) .", "They argued that the attention in such systems agree with word alignment to a certain extent by showing that the RNN-based system achieves comparable alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that they are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al.", "(2017) presented a toolkit that facilitates study for the attention in RNN-based models.", "There is also a number of other studies that analyze the attention in Transformer models.", "Tang et al.", "(2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in Transformer model (what they refer to as advanced attention mechanism) is very different from that of the attention in RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper in this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis on each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq models.", "Their goal was to improve translation output and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016) .", "In terms of improvements in word alignment quality, Legrand et al.", "(2016) ; Wang et al.", "(2018) ; proposed neu-ral word alignment modules decoupled from NMT systems, while Zenkel et al.", "(2019) introduced a separate module to extract alignment from NMT decoder states, with which they achieved comparable AER with fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al.", "(2013); Springenberg et al.", "(2014) ; Smilkov et al.", "(2017) .", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al.", "(2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al.", "(2017) used Layer-wise Relevance Propagation (LRP; Bach et al.", "2015) , an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model prediction, we are referring to the following problem: given a trained MT model and input tokens S = {s 0 , s 1 , .", ".", ".", ", s I−1 }, at a certain time step j when the models predicts t j , we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t j might not be arg max t j p(t j | t 1:j−1 ), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem regarding interpreting attention weights as word alignment.", "Suppose for the same source sentence, there are two alternative translations that diverge at target time step j, generating t j and t ′ j which respectively correspond to different source words.", "Presumably, the source word that is aligned to t j and t ′ j should changed correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before prediction of t j or t ′ j .", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation with the specified output label, regard-less of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al.", "(2017) ).", "Specifically, they do not refer to the weight of self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input 
feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x 0 , y 0 ), with y 0 being a specific image class and x 0 being an |X |-dimensional vector.", "Each entry of x 0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x 0 , a trained classifier can generate a prediction score for class y 0 , denoted as p(y 0 | x 0 ).", "Consider the first-order Taylor expansion of a perturbed version of this score at the neighborhood of input x 0 : p(y 0 | x) ≈ p(y 0 | x 0 ) + ∂p(y 0 | x) ∂x x 0 · (x − x 0 ) (1) This is essentially re-formulating the perturbed prediction score p(y 0 | x) as an affine approximation of the input features, while the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to the feature.", "Assuming a feature that is deemed as salient for the local perturbation of the prediction score would also be globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as ∂p(y | x) ∂x .", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such formulation has following nice properties: • The saliency of an input feature is related to the choice of output class y, as model scores of different output classes correspond to a different set of parameters, and hence resulting in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient could be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it could be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D Tensor corresponding to the level in each channel.", "The key question to apply such method to NMT is what constitutes the input feature to a NMT system.", "Li et al.", "(2016) proposed to use the embedding of of the input words as the input feature to formulate saliency score, which results in the saliency of an input word being a vector of the same dimension as embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute value of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and an one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence could be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z has certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of the words in the vocabulary.", "For the output word t j at time step j, we can similarly define the saliency of the one-hot vector z as: Ψ(z, t j ) = ∂p(t j | Z) ∂z (2) where p(t j | Z) is the probability of word t j generated by the NMT model given source sentence Z. 
Ψ(z, t j ) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z.", "1 Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s i : ψ(s i , t j ) = [ ∂p(t j | Z) ∂z ] s i = [ ∂p(t j | Z) ∂W s i · ∂W s i ∂z ] s i = [ ∂p(t j | Z) ∂W s i · W ] s i = ∂p(t j | Z) ∂W s i · W s i (3) where [·] i denotes the i-th row of a matrix or the ith element of a vector.", "In other words, the saliency ψ(s i , t j ) is a weighted sum of the word embedding of input word s i , with the partial gradient of each cell as the weight.", "By comparison, the word saliency 2 in Li et al.", "(2016) is defined as: ψ ′ (s i , t j ) = mean ( ∂p(t j | Z) ∂W s i ) (4) There are two implementation details that we would like to call for the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of embedding for such word need to be made to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s i , t j ) is not a probability distribution, which does not affect word alignment results because we are taking arg max.", "For visualizations presented herein, we normalized the distribution by p( s i | t j ) ∝ max(0, ψ(s i , t j )).", "One may also use softmax function for applications that need more well-formed probability distribution.", "1 Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of 0-entries.", "2 Li et al.", "(2016) mostly focused on studying saliency on the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradientbased saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e.", "having a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al.", "(2017) .", "The idea is to augment the input to the network into n samples by adding random noise generated by normal distribution N (0, σ 2 ).", "The saliency scores of each augmented sample are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot 
vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.", "Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations for our proposed method using resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source and measure AER against this reference.", "3 Notice that both automatic evaluation methods have their respective limitation: the force decoding method may force the model to predict something it deems unlikely, and thus generating noisy alignment; whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al.", "(2019) in data setup and use the accompanied scripts of that paper 4 for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq 5 to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014) , convolution systems (FConv) (Gehring et al., 2017) , and Transformer systems (Transformer) (Vaswani et al., 2017) .", "We use the pre-configured model architectures for IWSLT German-English experiments 6 to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For transformer, we follow the implementation in fairseq and used the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple version of attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to prove that smoothing itself does not improve the interpretation quality, and has to be used together with effective interpretation method; • (Li et al., 2016) : applied with normal backpropagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure in (Zenkel et al., 2019) to convert soft alignment scores to hard alignment.", "For force decoding experiments, we generate symmetrized alignment results with growdiag-final.", "We also include AER results 7 of fast-align (Dyer et al., 2013) , GIZA++ 8 and the best model (Add+SGD) from Zenkel et al.", "(2019) on the same dataset for comparison.", "However, the readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and is not making any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update model with full sentence to generate optimal 
[7] We reproduced the fast-align results as a sanity check and were able to replicate their numbers exactly with their released scripts.
[8] https://github.com/moses-smt/giza-pp

Because of the second caveat, we also run fast-align in an online alignment scenario, where we first train a fast-align model and then use it to decode the test set. This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014), where alignment models cannot practically be updated on-the-fly. We also believe this is a slightly fairer comparison for methods with online alignment capability, such as Zenkel et al. (2019) and this work.

The data used by Zenkel et al. (2019) does not include a manually aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30-sentence subset of the German-English test data with the Transformer model. We end up using the σ = 0.15 recommended in the original paper and a slightly smaller sample size of n = 30 for speed. This hyperparameter setting is applied to the other SmoothGrad experiments as-is. For comparability with previous work, we do not exclude these 30 sentences from the reported results; instead, we mark the affected numbers to raise caution.

5.3 Force Decoding Results

Table 1 shows the AER results under the force decoding setting. First, note that applying our saliency method with normal back-propagation reduces AER only for the FConv model; it instead increases AER for LSTM and Transformer, with the largest increase observed for the Transformer, where AER rises by about 20 points on average. However, applying SmoothGrad on top of this yields a sharp drop in AER, ending up 10-20 points below the attention-weight baseline. This is not merely an effect of the input noise: the same smoothing procedure applied to attention increases AER most of the time. To summarize, at least under the force decoding setting, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention-weight baseline.

As for Li et al. (2016), on the FConv and LSTM architectures their method is not only consistently worse than ours but at times also worse than attention, and the effect of SmoothGrad is less consistent on their saliency formulation than on ours. Although their method obtains better AER than ours under several settings with the Transformer model, it is clear overall that the superior mathematical soundness of our method translates into better interpretation quality.

While the GIZA++ model obtains the best alignment results in Table 1,[9] our word alignment interpretation of the FConv model with SmoothGrad mostly surpasses the alignment quality of fast-align (whether Online or Offline), sometimes by as much as 8.7 points (the symmetrized ro<>en result). Our best models are also largely on par with Zenkel et al. (2019). These are notable results, as our method is an interpretation method and no extra parameters are updated to optimize alignment quality. They also indicate that high-quality alignments can be induced from an NMT model without modifying its parameters, showing that the model has acquired such information implicitly.

[9] While Ghader and Monz (2017) showed that the AER obtained by an LSTM model is close to that of GIZA++, our experiments yield a much larger difference. We attribute this largely to the fact that we train our models with BPE, which Ghader and Monz (2017) explicitly avoided.
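Schematically, the force decoding numbers above come from a loop of the following shape. It reuses the hypothetical helpers sketched earlier and deliberately omits BPE merging and grow-diag-final symmetrization:

```python
import numpy as np

def force_decoding_aer(model, corpus):
    """Corpus-level AER of saliency-induced alignments under force decoding.
    `corpus` is assumed to yield (src_tokens, tgt_tokens, sure, possible)
    tuples, where the target side is the human-annotated reference."""
    a_and_s = a_and_p = n_hyp = n_sure = 0
    for src, tgt, sure, possible in corpus:
        # Force-generate each reference target word and collect its saliency.
        rows = [word_saliency_smoothgrad(model, src, tgt, j).cpu().numpy()
                for j in range(len(tgt))]
        hyp = hard_alignment(np.stack(rows))   # (tgt_len, src_len) scores
        a_and_s += len(hyp & sure)
        a_and_p += len(hyp & possible)
        n_hyp += len(hyp)
        n_sure += len(sure)
    return 1.0 - (a_and_s + a_and_p) / (n_hyp + n_sure)
```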
Most interestingly, although NMT is often deemed to perform poorly in low-resource settings, our interpretation works relatively well on the ro<>en language pair, which happens to be the pair with the least training data. We think this phenomenon merits further exploration.

In addition, for all reported methods the overall ordering by number of alignment errors is FConv < LSTM < Transformer. To our best knowledge this is also a novel insight, as the attention weights of FConv have not previously been analyzed alongside those of other architectures. We also observe that while our method is not strong enough to fully bridge the gap in attention noise level between model architectures, it does narrow the difference in some cases.

5.4 Free Decoding Results

Table 2 shows the results under the free decoding setting. The trend in this group of experiments is similar to that in Table 1, except that the Transformer occasionally outperforms the LSTM. We think this is mainly because the Transformer generates higher-quality translations, but it could also be partially attributed to noise in the fast-align reference. Notice also that the AER numbers are generally lower than in Table 1 under this setting. One reason is that the model is aligning output about which it is most confident, so less noise should be expected in its behavior. Moreover, qualitatively comparing the reference translations in the test set with the NMT output, we find the NMT output generally easier to align, as it is often a more literal translation.

6 Analysis

6.1 Comparison with Li et al. (2016)

The main reason the word saliency formulation of Li et al. (2016) does not work as well for word alignment is the lack of polarity in the formulation. In other words, it quantifies how much the input influences the output, but not in what way the input influences it. This is sufficient for error analysis, but it does not suit the purpose of word alignment, as humans only align a target word to the input words that constitute a translation pair, i.e. those with a positive influence.

Figure 2 shows a case from our German-English experiments where this problem occurs. In Subfigure (a), the source word nur has high saliency for several target words, e.g. should, yet nur is not actually translated in the reference. Our method, shown in Subfigure (b), correctly assigns negative (shown as white) or small positive values to this source word at all time steps. In particular, the saliency value of nur for should is negative with large magnitude, indicating a significant negative contribution to the prediction of that target word. A good word alignment interpretation should therefore strongly avoid aligning them.
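The polarity difference comes down to how each method collapses the embedding gradient into a scalar. In the sketch below, grad is the gradient of p(t_j | Z) with respect to the embedding row of s_i and emb is the row itself; the Li et al. (2016) scalar is written as the mean absolute gradient, following the description in Section 4.2:

```python
import torch

def scalar_saliency_ours(grad: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:
    # Eq. (3): a signed dot product. A source word that argues *against* t_j
    # receives a negative score and is never chosen as its alignment.
    return (grad * emb).sum(dim=-1)

def scalar_saliency_li(grad: torch.Tensor) -> torch.Tensor:
    # Li et al. (2016): magnitude only, so strong negative evidence is
    # indistinguishable from strong positive evidence.
    return grad.abs().mean(dim=-1)
```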
6.2 SmoothGrad

Tables 1 and 2 show that SmoothGrad is a crucial factor in reducing AER, especially for the Transformer. Figure 3 compares the saliency obtained under different SmoothGrad settings for the Transformer model in Table 1. Comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not merely a smoother version of the naïve gradient output, but also gains new information from the extra forward and backward evaluations on the noisy input. For example, compare the alignment point between the source word wir and the target word we: in Subfigure (a) this word pair has very low saliency, but in (c) it becomes the most likely alignment pair for that target word.

Referring back to our motivation for using SmoothGrad in Section 4.3, we think these observations verify that the Transformer is a case where very high non-linearities occur almost everywhere in the parameter space, so that the saliency obtained from a local perturbation is almost always a poor representation of the global saliency. This is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation gives a better estimate of the global saliency. Subfigures (b) and (d) further show that when the noise is too moderate, as in (b), the evaluation does not deviate enough from the original point to gain non-local information, while in (d) it deviates too much and the resulting alignment becomes almost random. Intuitively, the noise parameter σ should be sensitive to the model architecture or even to specific input feature values, but interestingly a single choice from the computer vision literature works well for all of our systems. We encourage future work to conduct a more comprehensive analysis of the effect of SmoothGrad on architectures more complicated than convolutional neural networks.

6.3 Alignment Dispersion

We run German-English alignments under several different SmoothGrad noise deviations σ and report their dispersion, measured as the entropy of the (soft) alignment distribution averaged over the number of target words. Results are summarized in Table 3 (caption: "Alignment distribution entropy for selected de-en models; att stands for Attention in Table 1"), where lower entropy indicates peakier alignments. First, we observe that the dispersion of word saliency grows as σ increases, which matches the observations in Figure 3. The alignment dispersion is also consistently lower for free decoding than for force decoding. This is consistent with our conjecture that the force decoding setting introduces extra noise into the model's behavior, although judging from this result the gap seems minimal. Comparing architectures, the dispersion of the attention weights does not correlate well with the dispersion of word saliency. We also notice that while the Transformer attention interpretation consistently yields higher AER, its dispersion is lower than that of the other architectures, indicating that with attention a lot of the probability mass is more often concentrated in the wrong place. This corroborates the finding of Raganato and Tiedemann (2018).
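For reference, the dispersion measure reported above can be computed as in the sketch below; normalizing each row with a softmax is our assumption, as Section 4.2 names it only as one option for obtaining well-formed distributions:

```python
import torch

def alignment_dispersion(score_matrices):
    """Entropy of the soft alignment distribution, averaged over the number
    of target words, as reported in Table 3. Each matrix in `score_matrices`
    has shape (tgt_len, src_len)."""
    total_entropy, total_words = 0.0, 0
    for scores in score_matrices:
        p = torch.softmax(scores, dim=-1)    # one distribution per target word
        entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=-1)
        total_entropy += entropy.sum().item()
        total_words += scores.shape[0]
    return total_entropy / total_words
```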
7 Discussion and Future Work

Several extensions to this work deserve discussion. First, we explored only two saliency methods among the many available (Montavon et al., 2018). In a preliminary study we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem. We suspect there is a gap between applying these methods to the mostly convolutional architectures of computer vision and to architectures with more non-linearities in NLP, and we hope future research from the NLP and machine learning communities can bridge it.

Secondly, the alignment errors of our method come from three sources: the limitations of NMT models in learning word alignments, the limitations of the interpretation method in recovering interpretable word alignments, and the ambiguity of word alignment itself. Although we have shown that high-quality alignments can be recovered from NMT systems, thus pushing our understanding of the limitations of NMT models, we are not yet able to separate these error sources in this work. Exploration in this direction would help us better understand both NMT models and the capability of saliency methods in NLP; that said, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity. For that purpose, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.

Finally, as mentioned before, we conduct only approximate evaluations of our interpretation method. An immediate piece of future work is to evaluate it on human-annotated translation outputs generated by the NMT system.

8 Conclusion

We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions. Our proposal is model-agnostic, can be applied either offline or online, and requires no parameter updates or architectural changes. Both the force decoding and free decoding evaluations show that our method generates word alignment interpretations of much higher quality than its attention-based counterpart. Our empirical results also probe into the NMT black box and reveal that, even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of quality comparable to fast-align. The model and code for our experiments are available at https://github.com/shuoyangd/meerkat.
paper_headers: 1 Introduction; 2 Related Work; 3 The Interpretation Problem; 4 Method; 4.1 Visual Saliency; 4.2 Word Saliency; 4.3 SmoothGrad; 5.1 Evaluation Method; 5.2 Setup; 5.3 Force Decoding Results; 6.2 SmoothGrad; 6.3 Alignment Dispersion; 7 Discussion And Future Work; 8 Conclusion
GEM-SciDuet-train-38#paper-1054#slide-3
Recap
Saliency-driven Word Alignment Interpretation for NMT
Saliency-driven Word Alignment Interpretation for NMT
[]
GEM-SciDuet-train-38#paper-1054#slide-4
GEM-SciDuet-train-38#paper-1054#slide-4
Focus on sollten
Saliency-driven Word Alignment Interpretation for NMT
Saliency-driven Word Alignment Interpretation for NMT
[]
GEM-SciDuet-train-38#paper-1054#slide-5
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretations of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under the two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common belief, architectures such as convolutional sequence-to-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvements should be made to these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the predictions of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme, as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing work related to our proposed interpretation method.", "For the attention in RNN-based sequence-to-sequence models, the first comprehensive analysis was conducted by Ghader and Monz (2017).", "They argued that the attention in such systems agrees with word alignment to a certain extent by showing that the RNN-based system achieves an alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that the two are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al. (2017) presented a toolkit that facilitates the study of attention in RNN-based models.", "There are also a number of other studies that analyze the attention in Transformer models.", "Tang et al. (2018a,b) conducted targeted evaluations of neural machine translation models on two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in the Transformer model (what they refer to as the advanced attention mechanism) is very different from that of the attention in the RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper into this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis of each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq model.", "Their goal was to improve translation output, and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016).", "In terms of improvements in word alignment quality, Legrand et al. (2016); Wang et al. (2018) proposed neural word alignment modules decoupled from NMT systems, while Zenkel et al. (2019) introduced a separate module to extract alignments from NMT decoder states, with which they achieved AER comparable to fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al. (2013); Springenberg et al. (2014); Smilkov et al. (2017).", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al. (2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al. (2017) used Layer-wise Relevance Propagation (LRP; Bach et al. 2015), an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model predictions, we are referring to the following problem: given a trained MT model and input tokens S = {s_0, s_1, ..., s_{I-1}}, at a certain time step j when the model predicts t_j, we want to know which source word in S "contributed" most to this prediction.", "Note that the prediction t_j might not be argmax_{t_j} p(t_j | t_{1:j-1}), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem with interpreting attention weights as word alignment.", "Suppose that for the same source sentence, there are two alternative translations that diverge at target time step j, generating t_j and t'_j, which correspond to different source words.", "Presumably, the source word that is aligned to t_j or t'_j should change correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before the prediction of t_j or t'_j.", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation to the specified output label, regardless of whether it is the most likely label predicted by the model.", "As a final note, the term "attention weights" here refers to the weights of the attention between encoder and decoder (the "encoder-decoder attention" in Vaswani et al. (2017)).", "Specifically, they do not refer to the weights of the self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing an analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x_0, y_0), with y_0 being a specific image class and x_0 being an |X|-dimensional vector.", "Each entry of x_0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x_0, a trained classifier can generate a prediction score for class y_0, denoted as p(y_0 | x_0).", "Consider the first-order Taylor expansion of a perturbed version of this score in the neighborhood of the input x_0: p(y_0 | x) ≈ p(y_0 | x_0) + [∂p(y_0 | x)/∂x |_{x_0}] · (x − x_0) (1)", "This essentially re-formulates the perturbed prediction score p(y_0 | x) as an affine approximation of the input features, with the "contribution" of each feature to the final prediction being the partial derivative of the prediction score with regard to that feature.", "Assuming that a feature deemed salient for a local perturbation of the prediction score would also be globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as ∂p(y | x)/∂x.", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such a formulation has the following nice properties: • The saliency of an input feature is related to the choice of output class y, as the model scores of different output classes correspond to different sets of parameters, and hence result in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient can be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it can be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D tensor corresponding to the level in each channel.", "The key question in applying such a method to NMT is what constitutes the input feature to an NMT system.", "Li et al. (2016) proposed to use the embeddings of the input words as the input features to formulate the saliency score, which results in the saliency of an input word being a vector of the same dimension as the embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute value of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and a one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence can be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z has a certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of a word in the vocabulary.", "For the output word t_j at time step j, we can similarly define the saliency of the one-hot vector z as: Ψ(z, t_j) = ∂p(t_j | Z)/∂z (2) where p(t_j | Z) is the probability of word t_j generated by the NMT model given source sentence Z.",
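As a concrete illustration of the gradient computation behind Equation (2) and the word-level reduction derived just below, here is a minimal PyTorch sketch. This is not the authors' released code (that lives in the meerkat repository linked at the end of the paper); the `model` interface, which maps source embeddings and a target prefix to a vector of next-word probabilities, is an assumption made purely for illustration.

```python
import torch

def word_saliency(model, src_embeds, tgt_prefix, tgt_word):
    # src_embeds: (src_len, dim) embedding rows actually fed to the encoder.
    # Differentiating w.r.t. per-position embedding rows automatically keeps
    # repeated source words separate, as required in Section 4.2.
    src_embeds = src_embeds.detach().requires_grad_(True)
    log_probs = model(src_embeds, tgt_prefix)  # assumed: (vocab_size,) log-probs for step j
    p = log_probs[tgt_word].exp()              # p(t_j | Z)
    p.backward()                               # fills src_embeds.grad
    # Eq. (3): dot product of each row's gradient with the embedding row itself.
    # The sign is preserved, so negative values indicate negative contributions.
    return (src_embeds.grad * src_embeds).sum(dim=-1).detach()  # (src_len,)
```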
"Ψ(z, t_j) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z. [1]", "Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s_i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s_i: ψ(s_i, t_j) = [∂p(t_j | Z)/∂z]_{s_i} = [∂p(t_j | Z)/∂W_{s_i} · ∂W_{s_i}/∂z]_{s_i} = [∂p(t_j | Z)/∂W_{s_i} · W]_{s_i} = ∂p(t_j | Z)/∂W_{s_i} · W_{s_i} (3) where [·]_i denotes the i-th row of a matrix or the i-th element of a vector.", "In other words, the saliency ψ(s_i, t_j) is a weighted sum of the word embedding of input word s_i, with the partial gradient of each cell as the weight.", "By comparison, the word saliency [2] in Li et al. (2016) is defined as: ψ'(s_i, t_j) = mean(∂p(t_j | Z)/∂W_{s_i}) (4)", "There are two implementation details that we would like to call to the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of the embedding for that word need to be made, to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s_i, t_j) is not a probability distribution, which does not affect word alignment results because we are taking the argmax.", "For the visualizations presented herein, we normalized the distribution by p(s_i | t_j) ∝ max(0, ψ(s_i, t_j)).", "One may also use the softmax function for applications that need a more well-formed probability distribution.", "[1] Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding matrix is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of the 0-entries.", "[2] Li et al. (2016) mostly focused on studying saliency at the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Sections 5.2 and 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradient-based saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e., have a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al. (2017).", "The idea is to augment the input to the network into n samples by adding random noise generated by a normal distribution N(0, σ²).", "The saliency scores of the n augmented samples are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs, which are represented as one-hot vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.",
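Building on the sketch above, the embedding-noise variant of SmoothGrad described in this passage could look like the following. The defaults match the values the paper reports tuning later (n = 30, σ = 0.15); the `model` interface is again an illustrative assumption.

```python
def smoothgrad_saliency(model, src_embeds, tgt_prefix, tgt_word, n=30, sigma=0.15):
    # Average the word saliency over n copies of the source embeddings,
    # each perturbed with Gaussian noise N(0, sigma^2) (Smilkov et al., 2017).
    total = src_embeds.new_zeros(src_embeds.size(0))
    for _ in range(n):
        noisy = src_embeds + sigma * torch.randn_like(src_embeds)
        total += word_saliency(model, noisy, tgt_prefix, tgt_word)
    return total / n
```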
"Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between the source sentences and the NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations of our proposed method using the available resources: • force decoding: take a human-annotated corpus, run the NMT models to force-generate the target side of the corpus, and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source, and measure AER against this reference. [3]", "Notice that both automatic evaluation methods have their respective limitations: the force decoding method may force the model to predict something it deems unlikely, and thus generate noisy alignments, whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al. (2019) in the data setup and use the accompanying scripts of that paper [4] for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for the German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language pair as the development set.", "For our NMT systems, we use fairseq [5] to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014), convolutional systems (FConv) (Gehring et al., 2017), and Transformer systems (Transformer) (Vaswani et al., 2017).", "We use the pre-configured model architectures for the IWSLT German-English experiments [6] to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For Transformer, we follow the implementation in fairseq and use the attention weights from the final layer, averaged across all heads; • Smoothed Attention: obtain multiple versions of the attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to demonstrate that smoothing by itself does not improve the interpretation quality, and has to be used together with an effective interpretation method; • Li et al. (2016): applied with normal back-propagation (Grad) and with SmoothGrad; • Ours: applied with normal back-propagation (Grad) and with SmoothGrad.", "For all the methods above, we follow the same procedure as Zenkel et al. (2019) to convert soft alignment scores to hard alignments.",
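The conversion from soft scores to hard alignment points, and the AER metric used throughout, can be sketched as follows. The per-target argmax is one common way to discretize soft scores; the paper follows Zenkel et al. (2019) for this step, so treat the first function as an approximation. The AER formula itself is the standard one from Och and Ney (2003).

```python
def to_hard_alignment(scores):
    # scores: (tgt_len, src_len) soft alignment matrix.
    # Simple discretization: align each target word j to its
    # highest-scoring source position; links are (src, tgt) pairs.
    return {(int(row.argmax()), j) for j, row in enumerate(scores)}

def aer(sure, possible, hypothesis):
    # Och & Ney (2003): AER = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|),
    # where P is the set of possible links (a superset of the sure links S).
    a, s, p = set(hypothesis), set(sure), set(possible)
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))
```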
"For the force decoding experiments, we generate symmetrized alignment results with grow-diag-final.", "We also include the AER results [7] of fast-align (Dyer et al., 2013), GIZA++ [8] and the best model (Add+SGD) from Zenkel et al. (2019) on the same dataset for comparison.", "However, readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and does not make any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update the model with the full sentence to generate optimal alignments, while our system and Zenkel et al. (2019) can do so on-the-fly.", "[7] We reproduced the fast-align results as a sanity check, and we were able to perfectly replicate their numbers with their released scripts.", "[8] https://github.com/moses-smt/giza-pp", "Realizing the second caveat, we also run fast-align under the online alignment scenario, where we first train a fast-align model and then decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014), where we cannot practically update alignment models on-the-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities, such as Zenkel et al. (2019) and this work.", "The data used in Zenkel et al. (2019) does not provide a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30-sentence subset of the German-English test data with the Transformer model.", "We ended up using the σ = 0.15 recommended in the original paper and a slightly smaller sample size of n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For comparison with previous work, we do not exclude these sentences from the reported results; we instead mark the affected numbers to raise caution.", "Force Decoding Results Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is reduced only for the FConv model, but instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER increases by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by the input noise, as the same smoothing procedure for attention increases the AER most of the time.", "To summarize, at least under the force decoding setting, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "As for Li et al. (2016), for the FConv and LSTM architectures, it is not only consistently worse than our method, but at times also worse than attention.", "Besides, the effect of SmoothGrad is not as consistent on their saliency formulation as on ours.", "Although with the Transformer model the Li et al. (2016) method obtained better AER than our method under several settings, it is still pretty clear overall that the superior mathematical soundness of our method translates into better interpretation quality.", "While the GIZA++ model obtains the best alignment results in Table 1 [9], most of our word alignment interpretations of the FConv model with SmoothGrad surpass the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on par with Zenkel et al. (2019).", "These are notable results, as our method is an interpretation method and no extra parameters are updated to optimize the quality of the alignments.", "On the other hand, this also indicates that it is possible to induce high-quality alignments from an NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "[9] While Ghader and Monz (2017) showed that the AER obtained by the LSTM model is close to that of GIZA++, our experiments yield a much larger difference.", "We think this is largely due to the fact that we choose to train our models with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "Most interestingly, although NMT is often deemed to perform poorly in low-resource settings, our interpretation seems to work relatively well on the ro<>en language pair, which happens to be the language pair for which we have the least training data.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To the best of our knowledge, this is also a novel insight, as no one has analyzed the attention weights of FConv alongside other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap in attention noise level between the different model architectures, it does manage to narrow the difference in some cases.", "Free Decoding Results Table 2 shows the results under the free decoding setting.", "The trend in this group of experiments is similar to Table 1, except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher-quality translations, but it could also be partially attributed to the noise in the fast-align reference.", "Also, notice that the AER numbers are generally lower compared to Table 1 under this setting.", "One reason is that the model is aligning output about which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translations in the test set and the NMT output, we find that it is generally easier to align the translation, as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al. (2016) The main reason why the word saliency formulation in Li et al. (2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way the input influences it.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e., have a positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency for several target words, e.g., should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with a large magnitude, indicating a significant negative contribution to the prediction of that target word.", "Hence, a good word alignment interpretation should strongly avoid aligning them.",
"SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor in reducing AER, especially for Transformer.", "Figure 3 compares the word saliency of the Transformer model in Table 1 under different SmoothGrad noise settings.", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between the source word wir and the target word we: in Subfigure (a), this word pair has very low saliency, but in (c), they become the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very poor representation of the global saliency almost all the time.", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation gives a better estimation of the global saliency.", "It can also be observed from Subfigures (b) and (d) that when the noise is too moderate, the evaluation does not deviate enough from the original spot to gain non-local information, while at (d) it deviates too much and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even to specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct a more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "(Table 3: Alignment distribution entropy for selected de-en models; att stands for attention in Table 1.)", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviations σ and report their dispersion as measured by the entropy of the (soft) alignment distribution, averaged over the number of target words.", "The results are summarized in Table 3, where lower entropy indicates more peaky alignments.", "First, we observe that the dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3.", "It should also be noted that the alignment dispersion is consistently lower for free decoding than for force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise into the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing the different architectures, the dispersion of the attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than that of the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018).",
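The dispersion statistic in Table 3 could be computed along these lines. This is a sketch; `soft_align` is assumed to hold non-negative per-target-word alignment scores (attention weights, or saliency clipped at zero as described in Section 4.2).

```python
def alignment_dispersion(soft_align, eps=1e-12):
    # soft_align: (tgt_len, src_len) non-negative soft alignment scores.
    p = soft_align / (soft_align.sum(dim=-1, keepdim=True) + eps)
    entropy = -(p * (p + eps).log()).sum(dim=-1)  # entropy per target word
    return entropy.mean()  # averaged over target words; lower = peakier
```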
"Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among the many others available (Montavon et al., 2018).", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods to the mostly-convolutional architectures in computer vision and to architectures with more non-linearities in NLP.", "We hope future research from the NLP and machine learning communities can bridge this gap.", "Secondly, the alignment errors of our method come from three different sources: the limitations of NMT models in learning word alignments, the limitations of the interpretation method in recovering interpretable word alignments, and the ambiguity in word alignment itself.", "Although we have shown that high-quality alignments can be recovered from NMT systems (thus pushing our understanding of the limitations of NMT models), we are not yet able to separate these sources of error in this work.", "While exploration in this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such a purpose, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we are only conducting approximate evaluations to measure the ability of our interpretation method.", "An immediate piece of future work would be to evaluate this on human-annotated translation outputs generated by the NMT system.", "Conclusion We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, can be applied either offline or online, and does not require any parameter updates or architectural changes.", "Both the force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality than its attention-based counterpart.", "Our empirical results also probe into the NMT black box and reveal that even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-5
Perturbation
Saliency-driven Word Alignment Interpretation for NMT
Saliency-driven Word Alignment Interpretation for NMT
[]
GEM-SciDuet-train-38#paper-1054#slide-6
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretation of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequenceto-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvement should be made on these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing related work to our proposed interpretation method.", "For the attention in RNN-based sequence-tosequence model, the first comprehensive analysis is conducted by Ghader and Monz (2017) .", "They argued that the attention in such systems agree with word alignment to a certain extent by showing that the RNN-based system achieves comparable alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that they are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al.", "(2017) presented a toolkit that facilitates study for the attention in RNN-based models.", "There is also a number of other studies that analyze the attention in Transformer models.", "Tang et al.", "(2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in Transformer model (what they refer to as advanced attention mechanism) is very different from that of the attention in RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper in this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis on each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq models.", "Their goal was to improve translation output and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016) .", "In terms of improvements in word alignment quality, Legrand et al.", "(2016) ; Wang et al.", "(2018) ; proposed neu-ral word alignment modules decoupled from NMT systems, while Zenkel et al.", "(2019) introduced a separate module to extract alignment from NMT decoder states, with which they achieved comparable AER with fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al.", "(2013); Springenberg et al.", "(2014) ; Smilkov et al.", "(2017) .", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al.", "(2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al.", "(2017) used Layer-wise Relevance Propagation (LRP; Bach et al.", "2015) , an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model prediction, we are referring to the following problem: given a trained MT model and input tokens S = {s 0 , s 1 , .", ".", ".", ", s I−1 }, at a certain time step j when the models predicts t j , we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t j might not be arg max t j p(t j | t 1:j−1 ), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem regarding interpreting attention weights as word alignment.", "Suppose for the same source sentence, there are two alternative translations that diverge at target time step j, generating t j and t ′ j which respectively correspond to different source words.", "Presumably, the source word that is aligned to t j and t ′ j should changed correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before prediction of t j or t ′ j .", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation with the specified output label, regard-less of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al.", "(2017) ).", "Specifically, they do not refer to the weight of self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input 
feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x 0 , y 0 ), with y 0 being a specific image class and x 0 being an |X |-dimensional vector.", "Each entry of x 0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x 0 , a trained classifier can generate a prediction score for class y 0 , denoted as p(y 0 | x 0 ).", "Consider the first-order Taylor expansion of a perturbed version of this score at the neighborhood of input x 0 : p(y 0 | x) ≈ p(y 0 | x 0 ) + ∂p(y 0 | x) ∂x x 0 · (x − x 0 ) (1) This is essentially re-formulating the perturbed prediction score p(y 0 | x) as an affine approximation of the input features, while the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to the feature.", "Assuming a feature that is deemed as salient for the local perturbation of the prediction score would also be globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as ∂p(y | x) ∂x .", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such formulation has following nice properties: • The saliency of an input feature is related to the choice of output class y, as model scores of different output classes correspond to a different set of parameters, and hence resulting in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient could be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it could be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D Tensor corresponding to the level in each channel.", "The key question to apply such method to NMT is what constitutes the input feature to a NMT system.", "Li et al.", "(2016) proposed to use the embedding of of the input words as the input feature to formulate saliency score, which results in the saliency of an input word being a vector of the same dimension as embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute value of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and an one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence could be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z has certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of the words in the vocabulary.", "For the output word t j at time step j, we can similarly define the saliency of the one-hot vector z as: Ψ(z, t j ) = ∂p(t j | Z) ∂z (2) where p(t j | Z) is the probability of word t j generated by the NMT model given source sentence Z. 
Ψ(z, t j ) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z.", "1 Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s i : ψ(s i , t j ) = [ ∂p(t j | Z) ∂z ] s i = [ ∂p(t j | Z) ∂W s i · ∂W s i ∂z ] s i = [ ∂p(t j | Z) ∂W s i · W ] s i = ∂p(t j | Z) ∂W s i · W s i (3) where [·] i denotes the i-th row of a matrix or the ith element of a vector.", "In other words, the saliency ψ(s i , t j ) is a weighted sum of the word embedding of input word s i , with the partial gradient of each cell as the weight.", "By comparison, the word saliency 2 in Li et al.", "(2016) is defined as: ψ ′ (s i , t j ) = mean ( ∂p(t j | Z) ∂W s i ) (4) There are two implementation details that we would like to call for the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of embedding for such word need to be made to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s i , t j ) is not a probability distribution, which does not affect word alignment results because we are taking arg max.", "For visualizations presented herein, we normalized the distribution by p( s i | t j ) ∝ max(0, ψ(s i , t j )).", "One may also use softmax function for applications that need more well-formed probability distribution.", "1 Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of 0-entries.", "2 Li et al.", "(2016) mostly focused on studying saliency on the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradientbased saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e.", "having a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al.", "(2017) .", "The idea is to augment the input to the network into n samples by adding random noise generated by normal distribution N (0, σ 2 ).", "The saliency scores of each augmented sample are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot 
vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.", "Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations for our proposed method using resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source and measure AER against this reference.", "3 Notice that both automatic evaluation methods have their respective limitation: the force decoding method may force the model to predict something it deems unlikely, and thus generating noisy alignment; whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al.", "(2019) in data setup and use the accompanied scripts of that paper 4 for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq 5 to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014) , convolution systems (FConv) (Gehring et al., 2017) , and Transformer systems (Transformer) (Vaswani et al., 2017) .", "We use the pre-configured model architectures for IWSLT German-English experiments 6 to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For transformer, we follow the implementation in fairseq and used the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple version of attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to prove that smoothing itself does not improve the interpretation quality, and has to be used together with effective interpretation method; • (Li et al., 2016) : applied with normal backpropagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure in (Zenkel et al., 2019) to convert soft alignment scores to hard alignment.", "For force decoding experiments, we generate symmetrized alignment results with growdiag-final.", "We also include AER results 7 of fast-align (Dyer et al., 2013) , GIZA++ 8 and the best model (Add+SGD) from Zenkel et al.", "(2019) on the same dataset for comparison.", "However, the readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and is not making any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update model with full sentence to generate optimal 
alignments, while our system and Zenkel et al.", "(2019) can do so on-the-fly.", "7 We reproduced the fast-align results as a sanity check and we were able to perfectly replicate their numbers with their released scripts.", "8 https://github.com/moses-smt/giza-pp Realizing the second caveat, we also run fastalign under the online alignment scenario, where we first train a fast-align model and decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) , where we cannot practically update alignment models onthe-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities such as Zenkel et al.", "(2019) and this work.", "The data used in Zenkel et al.", "(2019) did not provide a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30sentence subset of the German-English test data with the Transformer model.", "We ended up using the recommended σ = 0.15 in the original paper and a slightly smaller sample size n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For com-parison with previous work, we do not exclude these sentences from the reported results, we instead mark the numbers affected to raise caution.", "Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is only reduced for FConv model but instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER increases by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up with 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by input noise, as the same smoothing procedure for attention increases the AER most of the times.", "To summarize, at least under force decoding settings, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "Force Decoding Results As for Li et al.", "(2016) , for FConv and LSTM architectures, it is not only consistently worse than our method, but at times also worse than attention.", "Besides, the effect of SmoothGrad is also not as consistent on their saliency formulation as ours.", "Although with the Transformer model, the Li et al.", "(2016) method obtained better AER than our method under several settings, it is still pretty clear overall that the superior mathematical soundness of our method is translated into better interpretation quality.", "While the GIZA++ model obtains the best alignment result in Table 1 9 , most of our word alignment interpretation of FConv model with Smooth-Grad surpasses the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on-par with (Zenkel et al., 2019) .", "These are notable results as our method is an interpretation method and no extra parameter is updated to optimize the quality of alignment.", "On the other hand, this also indicates that it is possible to induce high-quality 9 While Ghader and Monz (2017) showed that the AER obtained by LSTM model is close to that of GIZA++, our experiments yield a much larger 
difference.", "We think this is largely due to the fact that we choose to train our model with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "alignments from NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "Most interestingly, although NMT is often deemed as performing poorly under low-resource setting, our interpretation seems to work relatively well on ro<>en language pair, which happens to be the language pair that we have least training data for.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To our best knowledge, this is also a novel insight, as no one has analyzed attention weights of FConv with other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap of the attention noise level between different model architecture, it does manage to narrow the difference in some cases.", "Table 2 shows the result under free decoding setting.", "The trend in this group of experiment is similar to Table 1 , except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher quality translations, but could also be partially attributed to the noise in fast-align reference.", "Also, notice that the AER numbers are also generally lower compared to Table 1 under this setting.", "One reason is that our model is aligning output with which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translation in the test set and the NMT output, we find that it is generally easier to align the translation as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al.", "(2016) The main reason why the word saliency formulation in Li et al.", "(2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way does the input influence.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e.", "have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency on several target words, e.g.", "should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating significant negative contributions to the prediction of that target word.", "Hence, a good word alignment interpreta-tion should strongly avoid aligning them.", "Free Decoding Results SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor to reduce AER, especially for Transformer.", "Figure 3 Table 1 .", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve 
gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between source word wir and target word we: in Subfigure (a), this word pair has very low saliency, but in (c), they become the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very Table 3 : Alignment distribution entropy for selected deen models.", "att stands for attention in Table 1. poor representation of the global saliency almost all the time.", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation will give a better estimation of the global saliency.", "It could also be observed from Subfigures (b) and (d) that when the noise is too moderate, the evaluation does not deviate enough from the original spot to gain non-local information, and at (d) it deviates too much and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviation σ and report their dispersion as measured by entropy of the (soft) alignment distribution averaged by number of target words.", "Results are summarized in Ta-ble 3, where lower entropy indicates more peaky alignments.", "First, we observe that dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3 .", "It should also be noted that the alignment dispersion is consistently lower for free decoding than force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise in the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018) .", "Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among many others available (Montavon et al., 2018) .", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods on mostlyconvolutional architectures in computer vision and architectures with more non-linearities in NLP.", "We hope the future research from the 
NLP and machine learning communities could bridge this gap.", "Secondly, the alignment errors in our method comes from three different sources: the limitation of NMT models on learning word alignments, the limitation of interpretation method on recovering interpretable word alignments, and the ambiguity in word alignments itself.", "Although we have shown that high quality alignment could be recovered from NMT systems (thus pushing our understanding on the limitation of NMT models), we are not yet able to separate these sources of errors in this work.", "While exploration on this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such purpose, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we are only conducting approximate evaluation to measure the ability of our interpretation method.", "An immediate future work would be evaluating this on human-annotated translation outputs generated by the NMT system.", "Conclusion We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, is able to be applied either offline or online, and does not require any parameter updates or architectural change.", "Both force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality compared to its attentionbased counterpart.", "Our empirical results also probe into the NMT black-box and reveal that even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-6
Assumption
The output score is more sensitive to perturbations in important features.
The output score is more sensitive to perturbations in important features.
[]
GEM-SciDuet-train-38#paper-1054#slide-7
1054
GEM-SciDuet-train-38#paper-1054#slide-7
Eg
Saliency-driven Word Alignment Interpretation for NMT
Saliency-driven Word Alignment Interpretation for NMT
[]
GEM-SciDuet-train-38#paper-1054#slide-9
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretation of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequenceto-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvement should be made on these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing related work to our proposed interpretation method.", "For the attention in RNN-based sequence-tosequence model, the first comprehensive analysis is conducted by Ghader and Monz (2017) .", "They argued that the attention in such systems agree with word alignment to a certain extent by showing that the RNN-based system achieves comparable alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that they are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al.", "(2017) presented a toolkit that facilitates study for the attention in RNN-based models.", "There is also a number of other studies that analyze the attention in Transformer models.", "Tang et al.", "(2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in Transformer model (what they refer to as advanced attention mechanism) is very different from that of the attention in RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper in this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis on each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq models.", "Their goal was to improve translation output and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016) .", "In terms of improvements in word alignment quality, Legrand et al.", "(2016) ; Wang et al.", "(2018) ; proposed neu-ral word alignment modules decoupled from NMT systems, while Zenkel et al.", "(2019) introduced a separate module to extract alignment from NMT decoder states, with which they achieved comparable AER with fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al.", "(2013); Springenberg et al.", "(2014) ; Smilkov et al.", "(2017) .", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al.", "(2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al.", "(2017) used Layer-wise Relevance Propagation (LRP; Bach et al.", "2015) , an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model prediction, we are referring to the following problem: given a trained MT model and input tokens S = {s 0 , s 1 , .", ".", ".", ", s I−1 }, at a certain time step j when the models predicts t j , we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t j might not be arg max t j p(t j | t 1:j−1 ), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem regarding interpreting attention weights as word alignment.", "Suppose for the same source sentence, there are two alternative translations that diverge at target time step j, generating t j and t ′ j which respectively correspond to different source words.", "Presumably, the source word that is aligned to t j and t ′ j should changed correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before prediction of t j or t ′ j .", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation with the specified output label, regard-less of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al.", "(2017) ).", "Specifically, they do not refer to the weight of self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input 
feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x 0 , y 0 ), with y 0 being a specific image class and x 0 being an |X |-dimensional vector.", "Each entry of x 0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x 0 , a trained classifier can generate a prediction score for class y 0 , denoted as p(y 0 | x 0 ).", "Consider the first-order Taylor expansion of a perturbed version of this score at the neighborhood of input x 0 : p(y 0 | x) ≈ p(y 0 | x 0 ) + ∂p(y 0 | x) ∂x x 0 · (x − x 0 ) (1) This is essentially re-formulating the perturbed prediction score p(y 0 | x) as an affine approximation of the input features, while the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to the feature.", "Assuming a feature that is deemed as salient for the local perturbation of the prediction score would also be globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as ∂p(y | x) ∂x .", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such formulation has following nice properties: • The saliency of an input feature is related to the choice of output class y, as model scores of different output classes correspond to a different set of parameters, and hence resulting in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient could be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it could be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D Tensor corresponding to the level in each channel.", "The key question to apply such method to NMT is what constitutes the input feature to a NMT system.", "Li et al.", "(2016) proposed to use the embedding of of the input words as the input feature to formulate saliency score, which results in the saliency of an input word being a vector of the same dimension as embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute value of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and an one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence could be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z has certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of the words in the vocabulary.", "For the output word t j at time step j, we can similarly define the saliency of the one-hot vector z as: Ψ(z, t j ) = ∂p(t j | Z) ∂z (2) where p(t j | Z) is the probability of word t j generated by the NMT model given source sentence Z. 
Ψ(z, t j ) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z.", "1 Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s i : ψ(s i , t j ) = [ ∂p(t j | Z) ∂z ] s i = [ ∂p(t j | Z) ∂W s i · ∂W s i ∂z ] s i = [ ∂p(t j | Z) ∂W s i · W ] s i = ∂p(t j | Z) ∂W s i · W s i (3) where [·] i denotes the i-th row of a matrix or the ith element of a vector.", "In other words, the saliency ψ(s i , t j ) is a weighted sum of the word embedding of input word s i , with the partial gradient of each cell as the weight.", "By comparison, the word saliency 2 in Li et al.", "(2016) is defined as: ψ ′ (s i , t j ) = mean ( ∂p(t j | Z) ∂W s i ) (4) There are two implementation details that we would like to call for the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of embedding for such word need to be made to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s i , t j ) is not a probability distribution, which does not affect word alignment results because we are taking arg max.", "For visualizations presented herein, we normalized the distribution by p( s i | t j ) ∝ max(0, ψ(s i , t j )).", "One may also use softmax function for applications that need more well-formed probability distribution.", "1 Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of 0-entries.", "2 Li et al.", "(2016) mostly focused on studying saliency on the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradientbased saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e.", "having a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al.", "(2017) .", "The idea is to augment the input to the network into n samples by adding random noise generated by normal distribution N (0, σ 2 ).", "The saliency scores of each augmented sample are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot 
vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.", "Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations for our proposed method using resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source and measure AER against this reference.", "3 Notice that both automatic evaluation methods have their respective limitation: the force decoding method may force the model to predict something it deems unlikely, and thus generating noisy alignment; whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al.", "(2019) in data setup and use the accompanied scripts of that paper 4 for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq 5 to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014) , convolution systems (FConv) (Gehring et al., 2017) , and Transformer systems (Transformer) (Vaswani et al., 2017) .", "We use the pre-configured model architectures for IWSLT German-English experiments 6 to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For transformer, we follow the implementation in fairseq and used the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple version of attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to prove that smoothing itself does not improve the interpretation quality, and has to be used together with effective interpretation method; • (Li et al., 2016) : applied with normal backpropagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure in (Zenkel et al., 2019) to convert soft alignment scores to hard alignment.", "For force decoding experiments, we generate symmetrized alignment results with growdiag-final.", "We also include AER results 7 of fast-align (Dyer et al., 2013) , GIZA++ 8 and the best model (Add+SGD) from Zenkel et al.", "(2019) on the same dataset for comparison.", "However, the readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and is not making any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update model with full sentence to generate optimal 
alignments, while our system and Zenkel et al. (2019) can do so on-the-fly.", "⁷ We reproduced the fast-align results as a sanity check and we were able to perfectly replicate their numbers with their released scripts.", "⁸ https://github.com/moses-smt/giza-pp", "Realizing the second caveat, we also run fast-align under the online alignment scenario, where we first train a fast-align model and then decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014), where we cannot practically update alignment models on-the-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities such as Zenkel et al. (2019) and this work.", "The data used in Zenkel et al. (2019) did not provide a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30-sentence subset of the German-English test data with the Transformer model.", "We ended up using the σ = 0.15 recommended in the original paper and a slightly smaller sample size n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For comparison with previous work, we do not exclude these sentences from the reported results; we instead mark the affected numbers to raise caution.", "5.3 Force Decoding Results", "Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is only reduced for the FConv model but instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER increases by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by input noise, as the same smoothing procedure applied to attention increases the AER most of the time.", "To summarize, at least under force decoding settings, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "As for Li et al. (2016), for the FConv and LSTM architectures, it is not only consistently worse than our method, but at times also worse than attention.", "Besides, the effect of SmoothGrad is also not as consistent on their saliency formulation as on ours.", "Although with the Transformer model the Li et al. (2016) method obtained better AER than our method under several settings, it is still clear overall that the superior mathematical soundness of our method translates into better interpretation quality.", "While the GIZA++ model obtains the best alignment result in Table 1,⁹ most of our word alignment interpretations of the FConv model with SmoothGrad surpass the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on par with Zenkel et al. (2019).", "These are notable results, as our method is an interpretation method and no extra parameter is updated to optimize the quality of alignment.", "On the other hand, this also indicates that it is possible to induce high-quality alignments from an NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "⁹ While Ghader and Monz (2017) showed that the AER obtained by the LSTM model is close to that of GIZA++, our experiments yield a much larger difference.", "We think this is largely due to the fact that we choose to train our model with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "Most interestingly, although NMT is often deemed to perform poorly under low-resource settings, our interpretation seems to work relatively well on the ro<>en language pair, which happens to be the language pair for which we have the least training data.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To the best of our knowledge, this is also a novel insight, as no one has analyzed the attention weights of FConv alongside other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap in attention noise level between different model architectures, it does manage to narrow the difference in some cases.", "Free Decoding Results", "Table 2 shows the results under the free decoding setting.", "The trend in this group of experiments is similar to Table 1, except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher-quality translations, but it could also be partially attributed to the noise in the fast-align reference.", "Also, notice that the AER numbers are generally lower compared to Table 1 under this setting.", "One reason is that our model is aligning output about which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translation in the test set and the NMT output, we find that it is generally easier to align the translation, as it is often a more literal translation.", "6 Analysis", "6.1 Comparison with Li et al. (2016)", "The main reason why the word saliency formulation in Li et al. (2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way the input influences it.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e., have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency on several target words, e.g., should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating a significant negative contribution to the prediction of that target word.", "Hence, a good word alignment interpretation should strongly avoid aligning them.", "6.2 SmoothGrad", "Tables 1 and 2 show that SmoothGrad is a crucial factor in reducing AER, especially for Transformer.", "[Figure 3: saliency under different SmoothGrad noise levels; cf. Table 1.]", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve
gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between source word wir and target word we: in Subfigure (a), this word pair has very low saliency, but in (c), they become the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very poor representation of the global saliency almost all the time.", "[Table 3: Alignment distribution entropy for selected de-en models. att stands for attention in Table 1.]", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation gives a better estimation of the global saliency.", "It can also be observed from Subfigures (b) and (d) that at (b), when the noise is too moderate, the evaluation does not deviate enough from the original spot to gain non-local information, while at (d) it deviates too much and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even to specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct a more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "6.3 Alignment Dispersion", "We run German-English alignments under several different SmoothGrad noise deviations σ and report their dispersion as measured by the entropy of the (soft) alignment distribution averaged over the number of target words.", "Results are summarized in Table 3, where lower entropy indicates more peaky alignments.", "First, we observe that the dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3.", "It should also be noted that the alignment dispersion is consistently lower for free decoding than for force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise in the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than that of the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018).", "7 Discussion And Future Work", "There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among the many others available (Montavon et al., 2018).", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods to the mostly-convolutional architectures in computer vision and to architectures with more non-linearities in NLP.", "We hope that future research from the
NLP and machine learning communities could bridge this gap.", "Secondly, the alignment errors in our method come from three different sources: the limitation of NMT models in learning word alignments, the limitation of the interpretation method in recovering interpretable word alignments, and the ambiguity of word alignment itself.", "Although we have shown that high-quality alignments can be recovered from NMT systems (thus pushing our understanding of the limitations of NMT models), we are not yet able to separate these sources of errors in this work.", "While exploration in this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such a purpose, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we are only conducting approximate evaluations to measure the ability of our interpretation method.", "An immediate future work would be evaluating it on human-annotated translation outputs generated by the NMT system.", "8 Conclusion", "We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, can be applied either offline or online, and does not require any parameter updates or architectural changes.", "Both force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality compared to its attention-based counterpart.", "Our empirical results also probe into the NMT black box and reveal that even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
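To make the saliency computation above concrete, here is a minimal PyTorch sketch of Eq. 3: the gradient of the target-word probability with respect to each source embedding row, dotted with that row. The names `src_embed` and `forward_from_embeddings` are assumptions for illustration (a hook that runs the model from precomputed source embeddings), not fairseq API.

```python
import torch

def word_saliency(model, src_tokens, tgt_prefix, tgt_word):
    """Sketch of psi(s_i, t_j) from Eq. 3 for one target word t_j.
    Returns one saliency scalar per source position."""
    # Detach the looked-up embeddings and track gradients per position,
    # so repeated source words keep separate gradients (an implementation
    # detail the paper calls out explicitly).
    emb = model.src_embed(src_tokens).detach().requires_grad_(True)
    # Assumed hook: returns log-probabilities of shape (tgt_len, vocab).
    log_probs = model.forward_from_embeddings(emb, tgt_prefix)
    p = log_probs[-1, tgt_word].exp()  # p(t_j | Z) at the current step
    p.backward()
    # Weighted sum of each embedding row, with its partial gradients as
    # the weights: psi(s_i, t_j) = dP/dW_{s_i} . W_{s_i}.
    return (emb.grad * emb).sum(dim=-1)
```

Because `tgt_word` is an explicit argument, the same routine scores any output word, which is exactly how the interpretation adapts to the chosen prediction rather than being fixed before it, as attention weights are.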
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-9
What's good about this
Derivatives are easy to obtain for any DL toolkit. Adapts with the choice of output words.
Derivatives are easy to obtain for any DL toolkit. Adapts with the choice of output words.
[]
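The slide's second point (the interpretation adapts with the choice of output words) comes from scoring a specific t_j as above; once a full (tgt_len × src_len) saliency matrix has been collected, alignments fall out directly. A small self-contained sketch of the conversion, using the max(0, ψ) normalization from Section 4.2 and per-row argmax for hard links:

```python
import torch

def saliency_to_alignment(saliency):
    """saliency: (tgt_len, src_len) tensor, row j holding psi(s_i, t_j).
    Returns soft scores p(s_i | t_j) ∝ max(0, psi) and hard argmax links."""
    pos = saliency.clamp(min=0.0)
    soft = pos / pos.sum(dim=-1, keepdim=True).clamp(min=1e-9)
    hard = saliency.argmax(dim=-1)  # one source index per target word
    return soft, hard

# Toy usage: 2 target words over 3 source words.
scores = torch.tensor([[0.3, -0.1, 0.05], [-0.2, 0.7, 0.1]])
soft, hard = saliency_to_alignment(scores)
print(hard.tolist())  # [0, 1]
```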
GEM-SciDuet-train-38#paper-1054#slide-10
GEM-SciDuet-train-38#paper-1054#slide-10
Prior Work on Saliency
Widely used and studied in Computer Vision! Also in a few NLP works for qualitative analysis.
Widely used and studied in Computer Vision! Also in a few NLP works for qualitative analysis.
[]
GEM-SciDuet-train-38#paper-1054#slide-11
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretation of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequenceto-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvement should be made on these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing related work to our proposed interpretation method.", "For the attention in RNN-based sequence-tosequence model, the first comprehensive analysis is conducted by Ghader and Monz (2017) .", "They argued that the attention in such systems agree with word alignment to a certain extent by showing that the RNN-based system achieves comparable alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that they are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al.", "(2017) presented a toolkit that facilitates study for the attention in RNN-based models.", "There is also a number of other studies that analyze the attention in Transformer models.", "Tang et al.", "(2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in Transformer model (what they refer to as advanced attention mechanism) is very different from that of the attention in RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper in this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis on each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
"Their goal was to improve translation output, and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016).", "In terms of improvements in word alignment quality, Legrand et al. (2016); Wang et al. (2018) proposed neural word alignment modules decoupled from NMT systems, while Zenkel et al. (2019) introduced a separate module to extract alignments from NMT decoder states, with which they achieved AER comparable to fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al. (2013); Springenberg et al. (2014); Smilkov et al. (2017).", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al. (2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al. (2017) used Layer-wise Relevance Propagation (LRP; Bach et al. 2015), an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model predictions, we are referring to the following problem: given a trained MT model and input tokens S = {s_0, s_1, ..., s_{I-1}}, at a certain time step j when the model predicts t_j, we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t_j might not be argmax_{t_j} p(t_j | t_{1:j-1}), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem with interpreting attention weights as word alignment.", "Suppose that for the same source sentence, there are two alternative translations that diverge at target time step j, generating t_j and t'_j, which respectively correspond to different source words.", "Presumably, the source word that is aligned to t_j or t'_j should change correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weights are computed before the prediction of t_j or t'_j.", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation to the specified output label, regardless of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al. (2017)).", "Specifically, it does not refer to the weights of the self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input feature is defined by the partial gradient of the output score with regard to the input.",
"We propose to extend this idea to NMT by drawing an analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x_0, y_0), with y_0 being a specific image class and x_0 being an |X|-dimensional vector.", "Each entry of x_0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x_0, a trained classifier can generate a prediction score for class y_0, denoted as p(y_0 | x_0).", "Consider the first-order Taylor expansion of a perturbed version of this score in the neighborhood of input x_0: $p(y_0 \mid x) \approx p(y_0 \mid x_0) + \frac{\partial p(y_0 \mid x)}{\partial x}\Big|_{x_0} \cdot (x - x_0)$ (1).", "This essentially re-formulates the perturbed prediction score p(y_0 | x) as an affine approximation of the input features, with the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to that feature.", "Assuming that a feature deemed salient under local perturbation of the prediction score is also globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as $\frac{\partial p(y \mid x)}{\partial x}$.", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such a formulation has the following nice properties: • The saliency of an input feature is related to the choice of output class y, as the model scores of different output classes correspond to different sets of parameters, hence resulting in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient can be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it can be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D tensor corresponding to the pixel level in each channel.", "The key question in applying such a method to NMT is what constitutes the input features of an NMT system.", "Li et al. (2016) proposed to use the embeddings of the input words as the input features to formulate the saliency score, which results in the saliency of an input word being a vector of the same dimension as the embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute value of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and a one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence can be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z bears a certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of a word in the vocabulary.", "For the output word t_j at time step j, we can similarly define the saliency of the one-hot vector z as: $\Psi(z, t_j) = \frac{\partial p(t_j \mid Z)}{\partial z}$ (2), where p(t_j | Z) is the probability of word t_j generated by the NMT model given source sentence Z. Ψ(z, t_j) is a vector of the same size as z.",
"However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z.[1]", "Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s_i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s_i: $\psi(s_i, t_j) = \left[\frac{\partial p(t_j \mid Z)}{\partial z}\right]_{s_i} = \left[\frac{\partial p(t_j \mid Z)}{\partial W_{s_i}} \cdot \frac{\partial W_{s_i}}{\partial z}\right]_{s_i} = \left[\frac{\partial p(t_j \mid Z)}{\partial W_{s_i}} \cdot W\right]_{s_i} = \frac{\partial p(t_j \mid Z)}{\partial W_{s_i}} \cdot W_{s_i}$ (3), where [·]_i denotes the i-th row of a matrix or the i-th element of a vector.", "In other words, the saliency ψ(s_i, t_j) is a weighted sum of the word embedding of input word s_i, with the partial gradient of each cell as the weight (a code sketch of this computation follows this content block).", "By comparison, the word saliency[2] in Li et al. (2016) is defined as: $\psi'(s_i, t_j) = \mathrm{mean}\left(\frac{\partial p(t_j \mid Z)}{\partial W_{s_i}}\right)$ (4).", "There are two implementation details that we would like to call to the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of the embedding for that word need to be made, to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s_i, t_j) is not a probability distribution, which does not affect word alignment results because we are taking the argmax.", "For the visualizations presented herein, we normalized the distribution by p(s_i | t_j) ∝ max(0, ψ(s_i, t_j)).", "One may also use the softmax function for applications that need a more well-formed probability distribution.", "[Footnote 1] Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding matrix is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only the parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of the 0-entries.", "[Footnote 2] Li et al. (2016) mostly focused on studying saliency at the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradient-based saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e., have a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al. (2017).", "The idea is to augment the input to the network into n samples by adding random noise drawn from a normal distribution N(0, σ²).", "The saliency scores of the augmented samples are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs, which are represented as one-hot vectors, we instead add noise to the queried embedding vectors.",
"This allows us to introduce more randomness for each word input (see the SmoothGrad sketch that follows this content block).", "Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations for our proposed method using the resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus, and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source, and measure AER against this reference.[3] (A minimal AER computation example also follows this content block.)", "[Footnote 3] Notice that both automatic evaluation methods have their respective limitations: the force decoding method may force the model to predict something it deems unlikely, thus generating noisy alignments, whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al. (2019) in data setup and use the accompanying scripts of that paper for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for the German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014), convolutional systems (FConv) (Gehring et al., 2017), and Transformer systems (Transformer) (Vaswani et al., 2017).", "We use the pre-configured model architectures for the IWSLT German-English experiments to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For Transformer, we follow the implementation in fairseq and use the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple versions of the attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to show that smoothing by itself does not improve the interpretation quality, and has to be used together with an effective interpretation method; • (Li et al., 2016): applied with normal back-propagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure as Zenkel et al. (2019) to convert soft alignment scores into hard alignments.", "For the force decoding experiments, we generate symmetrized alignment results with grow-diag-final.", "We also include the AER results[7] of fast-align (Dyer et al., 2013), GIZA++[8], and the best model (Add+SGD) from Zenkel et al. (2019) on the same dataset for comparison.", "However, readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and does not make any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update the model with the full sentence to generate optimal alignments, while our system and Zenkel et al. (2019) can do so on-the-fly.",
"[Footnote 7] We reproduced the fast-align results as a sanity check, and we were able to perfectly replicate their numbers with their released scripts.", "[Footnote 8] https://github.com/moses-smt/giza-pp", "Realizing the second caveat, we also run fast-align under the online alignment scenario, where we first train a fast-align model and then decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014), where we cannot practically update alignment models on-the-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities, such as Zenkel et al. (2019) and this work.", "The data from Zenkel et al. (2019) does not include a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30-sentence subset of the German-English test data with the Transformer model.", "We ended up using the σ = 0.15 recommended in the original paper and a slightly smaller sample size n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For comparison with previous work, we do not exclude these sentences from the reported results; we instead mark the affected numbers to raise caution.", "Force Decoding Results Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is reduced only for the FConv model, and instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER increases by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by input noise, as the same smoothing procedure applied to attention increases the AER most of the time.", "To summarize, at least under the force decoding setting, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "As for Li et al. (2016), for the FConv and LSTM architectures their method is not only consistently worse than ours, but at times also worse than attention.", "Besides, the effect of SmoothGrad is also not as consistent on their saliency formulation as on ours.", "Although the Li et al. (2016) method obtained better AER than our method with the Transformer model under several settings, it is still clear overall that the superior mathematical soundness of our method translates into better interpretation quality.", "While the GIZA++ model obtains the best alignment results in Table 1,[9] most of our word alignment interpretations of the FConv model with SmoothGrad surpass the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on par with Zenkel et al. (2019).", "These are notable results, as our method is an interpretation method and no extra parameters are updated to optimize the quality of the alignments.", "On the other hand, this also indicates that it is possible to induce high-quality alignments from an NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "[Footnote 9] While Ghader and Monz (2017) showed that the AER obtained by the LSTM model is close to that of GIZA++, our experiments yield a much larger difference.",
"We think this is largely due to the fact that we chose to train our models with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "Most interestingly, although NMT is often deemed to perform poorly under low-resource settings, our interpretation seems to work relatively well on the ro<>en language pair, which happens to be the language pair for which we have the least training data.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To the best of our knowledge, this is also a novel insight, as no one has analyzed the attention weights of FConv alongside other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap in attention noise levels between different model architectures, it does manage to narrow the difference in some cases.", "Free Decoding Results Table 2 shows the results under the free decoding setting.", "The trend in this group of experiments is similar to that in Table 1, except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher-quality translations, but it could also be partially attributed to the noise in the fast-align reference.", "Also, notice that the AER numbers are generally lower compared to Table 1 under this setting.", "One reason is that our model is aligning output about which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translations in the test set and the NMT output, we find that it is generally easier to align the NMT output, as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al. (2016) The main reason why the word saliency formulation in Li et al. (2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way the input influences it.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e., have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency for several target words, e.g., should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating a significant negative contribution to the prediction of that target word.", "Hence, a good word alignment interpretation should strongly avoid aligning them.", "SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor in reducing AER, especially for Transformer.", "Figure 3 illustrates this effect under different SmoothGrad noise settings (model settings as in Table 1).", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.",
"For example, compare the alignment point between source word wir and target word we: in Subfigure (a), this word pair has very low saliency, but in (c), they become the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very poor representation of the global saliency almost all the time.", "[Table 3: Alignment distribution entropy for selected de-en models; att stands for attention in Table 1.]", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation gives a better estimation of the global saliency.", "It can also be observed from Subfigures (b) and (d) that when the noise is too moderate, as in (b), the evaluation does not deviate enough from the original spot to gain non-local information, while at (d) it deviates too much, and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even to specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct a more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviations σ and report their dispersion, as measured by the entropy of the (soft) alignment distribution averaged over the number of target words.", "Results are summarized in Table 3, where lower entropy indicates more peaky alignments.", "First, we observe that the dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3.", "It should also be noted that the alignment dispersion is consistently lower for free decoding than for force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise into the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than that of the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018).", "Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among the many others available (Montavon et al., 2018).", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods to the mostly-convolutional architectures in computer vision and to architectures with more non-linearities in NLP.", "We hope future research from the NLP and machine learning communities could bridge this gap.",
"Secondly, the alignment errors in our method come from three different sources: the limitations of NMT models in learning word alignments, the limitations of the interpretation method in recovering interpretable word alignments, and the ambiguity of word alignment itself.", "Although we have shown that high-quality alignments can be recovered from NMT systems (thus pushing our understanding of the limitations of NMT models), we are not yet able to separate these sources of error in this work.", "While exploration in this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such purposes, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we only conduct approximate evaluations to measure the ability of our interpretation method.", "An immediate piece of future work would be to evaluate this on human-annotated translation outputs generated by the NMT system.", "Conclusion We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, can be applied either offline or online, and does not require any parameter updates or architectural changes.", "Both force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality than its attention-based counterpart.", "Our empirical results also probe into the NMT black box and reveal that even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
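To make Equation (3) concrete, here is a minimal PyTorch sketch of the word-saliency computation. It is an illustration under stated assumptions, not the authors' released implementation (that lives at https://github.com/shuoyangd/meerkat): `log_prob_fn` is a hypothetical wrapper that runs the NMT model on a sequence of source embeddings (with the target prefix held fixed, as in force decoding) and returns target-side log-probabilities.

```python
import torch

def word_saliency(embedding, log_prob_fn, src_ids, tgt_pos, tgt_id):
    """Compute psi(s_i, t_j) of Equation (3) for every source position i.

    embedding   -- torch.nn.Embedding holding the source embedding matrix W
    log_prob_fn -- hypothetical callable: (src_len, dim) embeddings ->
                   (tgt_len, vocab) log-probabilities log p(t | Z)
    src_ids     -- LongTensor of source token ids, shape (src_len,)
    tgt_pos     -- target time step j
    tgt_id      -- id of the target word t_j being interpreted
    """
    # One embedding copy per source *position*: repeated source words then
    # have separate leaf tensors, so their gradients are not merged -- the
    # first implementation detail noted in the Word Saliency section.
    src_embeds = embedding(src_ids).detach().requires_grad_(True)

    # Differentiating log p instead of p rescales all psi values for a fixed
    # t_j by the same positive constant 1/p, so the argmax over source
    # positions (the induced alignment) is unchanged.
    score = log_prob_fn(src_embeds)[tgt_pos, tgt_id]
    (grad,) = torch.autograd.grad(score, src_embeds)

    # Equation (3): dot the gradient w.r.t. each embedding row with the row
    # itself, giving one scalar saliency per source position.
    psi = (grad * src_embeds).sum(dim=-1)

    # Normalization used for the paper's visualizations:
    # p(s_i | t_j) proportional to max(0, psi(s_i, t_j)).
    vis = torch.clamp(psi, min=0)
    return psi, vis / vis.sum().clamp_min(1e-9)
```

The hard alignment for target position j is then `psi.argmax()` over source positions, matching the argmax conversion described in the Setup section.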
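The SmoothGrad variant of Section 4.3 layers directly on the sketch above. This is again a hedged illustration: it adds N(0, σ²) noise to the queried embedding vectors (the paper's modification of SmoothGrad) and averages the resulting saliencies, with σ = 0.15 and n = 30 following the paper's tuned setting. Whether the final dot product uses the noisy or the clean embedding row is a detail the text does not pin down; this sketch uses the noisy copy.

```python
import torch

def smoothgrad_word_saliency(embedding, log_prob_fn, src_ids, tgt_pos, tgt_id,
                             sigma=0.15, n=30):
    """Average word saliency over n Gaussian-perturbed embedding copies."""
    acc = torch.zeros(src_ids.shape[0])
    for _ in range(n):
        clean = embedding(src_ids).detach()
        # Perturb the queried embedding vectors with N(0, sigma^2) noise.
        noisy = (clean + sigma * torch.randn_like(clean)).requires_grad_(True)
        score = log_prob_fn(noisy)[tgt_pos, tgt_id]
        (grad,) = torch.autograd.grad(score, noisy)
        # Equation (3) evaluated on the noisy copy.
        acc += (grad * noisy).detach().sum(dim=-1)
    return acc / n  # averaging cancels the noise in the gradients
```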
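Both automatic evaluations score alignments with AER. For readers unfamiliar with the metric, here is a minimal sketch of its standard definition (Och and Ney, 2003), where gold links are split into sure (S) and possible (P ⊇ S) sets; the toy numbers are invented purely for illustration and are unrelated to the paper's results.

```python
def aer(hyp, sure, possible):
    """AER = 1 - (|A∩S| + |A∩P|) / (|A| + |S|); all arguments are sets of
    (source_index, target_index) link pairs, with sure a subset of possible."""
    return 1.0 - (len(hyp & sure) + len(hyp & possible)) / (len(hyp) + len(sure))

# Toy usage: both hypothesized links are at least possible, one is sure.
hyp = {(0, 0), (1, 2)}
sure = {(0, 0)}
possible = {(0, 0), (1, 2)}
print(aer(hyp, sure, possible))  # -> 0.0
```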
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-11
SmoothGrad
Gradients are a very local measure of sensitivity. Highly non-linear models may have pathological points where the gradients are noisy. Solution: calculate saliency for multiple copies of the same input corrupted with Gaussian noise, and average the saliency of the copies. Saliency-driven Word Alignment Interpretation for NMT
[]
GEM-SciDuet-train-38#paper-1054#slide-13
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretation of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequenceto-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvement should be made on these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing related work to our proposed interpretation method.", "For the attention in RNN-based sequence-tosequence model, the first comprehensive analysis is conducted by Ghader and Monz (2017) .", "They argued that the attention in such systems agree with word alignment to a certain extent by showing that the RNN-based system achieves comparable alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that they are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al.", "(2017) presented a toolkit that facilitates study for the attention in RNN-based models.", "There is also a number of other studies that analyze the attention in Transformer models.", "Tang et al.", "(2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in Transformer model (what they refer to as advanced attention mechanism) is very different from that of the attention in RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper in this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis on each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq models.", "Their goal was to improve translation output and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016) .", "In terms of improvements in word alignment quality, Legrand et al.", "(2016) ; Wang et al.", "(2018) ; proposed neu-ral word alignment modules decoupled from NMT systems, while Zenkel et al.", "(2019) introduced a separate module to extract alignment from NMT decoder states, with which they achieved comparable AER with fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al.", "(2013); Springenberg et al.", "(2014) ; Smilkov et al.", "(2017) .", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al.", "(2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al.", "(2017) used Layer-wise Relevance Propagation (LRP; Bach et al.", "2015) , an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model prediction, we are referring to the following problem: given a trained MT model and input tokens S = {s 0 , s 1 , .", ".", ".", ", s I−1 }, at a certain time step j when the models predicts t j , we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t j might not be arg max t j p(t j | t 1:j−1 ), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem regarding interpreting attention weights as word alignment.", "Suppose for the same source sentence, there are two alternative translations that diverge at target time step j, generating t j and t ′ j which respectively correspond to different source words.", "Presumably, the source word that is aligned to t j and t ′ j should changed correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before prediction of t j or t ′ j .", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation with the specified output label, regard-less of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al.", "(2017) ).", "Specifically, they do not refer to the weight of self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input 
feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x 0 , y 0 ), with y 0 being a specific image class and x 0 being an |X |-dimensional vector.", "Each entry of x 0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x 0 , a trained classifier can generate a prediction score for class y 0 , denoted as p(y 0 | x 0 ).", "Consider the first-order Taylor expansion of a perturbed version of this score at the neighborhood of input x 0 : p(y 0 | x) ≈ p(y 0 | x 0 ) + ∂p(y 0 | x) ∂x x 0 · (x − x 0 ) (1) This is essentially re-formulating the perturbed prediction score p(y 0 | x) as an affine approximation of the input features, while the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to the feature.", "Assuming a feature that is deemed as salient for the local perturbation of the prediction score would also be globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as ∂p(y | x) ∂x .", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such formulation has following nice properties: • The saliency of an input feature is related to the choice of output class y, as model scores of different output classes correspond to a different set of parameters, and hence resulting in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient could be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it could be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D Tensor corresponding to the level in each channel.", "The key question to apply such method to NMT is what constitutes the input feature to a NMT system.", "Li et al.", "(2016) proposed to use the embedding of of the input words as the input feature to formulate saliency score, which results in the saliency of an input word being a vector of the same dimension as embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute value of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and an one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence could be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z has certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of the words in the vocabulary.", "For the output word t j at time step j, we can similarly define the saliency of the one-hot vector z as: Ψ(z, t j ) = ∂p(t j | Z) ∂z (2) where p(t j | Z) is the probability of word t j generated by the NMT model given source sentence Z. 
Ψ(z, t j ) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z.", "1 Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s i : ψ(s i , t j ) = [ ∂p(t j | Z) ∂z ] s i = [ ∂p(t j | Z) ∂W s i · ∂W s i ∂z ] s i = [ ∂p(t j | Z) ∂W s i · W ] s i = ∂p(t j | Z) ∂W s i · W s i (3) where [·] i denotes the i-th row of a matrix or the ith element of a vector.", "In other words, the saliency ψ(s i , t j ) is a weighted sum of the word embedding of input word s i , with the partial gradient of each cell as the weight.", "By comparison, the word saliency 2 in Li et al.", "(2016) is defined as: ψ ′ (s i , t j ) = mean ( ∂p(t j | Z) ∂W s i ) (4) There are two implementation details that we would like to call for the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of embedding for such word need to be made to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s i , t j ) is not a probability distribution, which does not affect word alignment results because we are taking arg max.", "For visualizations presented herein, we normalized the distribution by p( s i | t j ) ∝ max(0, ψ(s i , t j )).", "One may also use softmax function for applications that need more well-formed probability distribution.", "1 Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of 0-entries.", "2 Li et al.", "(2016) mostly focused on studying saliency on the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradientbased saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e.", "having a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al.", "(2017) .", "The idea is to augment the input to the network into n samples by adding random noise generated by normal distribution N (0, σ 2 ).", "The saliency scores of each augmented sample are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot 
vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.", "Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations for our proposed method using resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source and measure AER against this reference.", "3 Notice that both automatic evaluation methods have their respective limitation: the force decoding method may force the model to predict something it deems unlikely, and thus generating noisy alignment; whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al.", "(2019) in data setup and use the accompanied scripts of that paper 4 for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq 5 to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014) , convolution systems (FConv) (Gehring et al., 2017) , and Transformer systems (Transformer) (Vaswani et al., 2017) .", "We use the pre-configured model architectures for IWSLT German-English experiments 6 to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For transformer, we follow the implementation in fairseq and used the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple version of attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to prove that smoothing itself does not improve the interpretation quality, and has to be used together with effective interpretation method; • (Li et al., 2016) : applied with normal backpropagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure in (Zenkel et al., 2019) to convert soft alignment scores to hard alignment.", "For force decoding experiments, we generate symmetrized alignment results with growdiag-final.", "We also include AER results 7 of fast-align (Dyer et al., 2013) , GIZA++ 8 and the best model (Add+SGD) from Zenkel et al.", "(2019) on the same dataset for comparison.", "However, the readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and is not making any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update model with full sentence to generate optimal 
alignments, while our system and Zenkel et al.", "(2019) can do so on-the-fly.", "7 We reproduced the fast-align results as a sanity check and we were able to perfectly replicate their numbers with their released scripts.", "8 https://github.com/moses-smt/giza-pp Realizing the second caveat, we also run fast-align under the online alignment scenario, where we first train a fast-align model and decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014), where we cannot practically update alignment models on-the-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities such as Zenkel et al.", "(2019) and this work.", "The data used in Zenkel et al.", "(2019) did not provide a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30-sentence subset of the German-English test data with the Transformer model.", "We ended up using the σ = 0.15 recommended in the original paper and a slightly smaller sample size n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For comparison with previous work, we do not exclude these sentences from the reported results; instead, we mark the affected numbers to raise caution.", "Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is reduced only for the FConv model and instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER rises by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by input noise, as the same smoothing procedure applied to attention increases the AER most of the time.", "To summarize, at least under the force decoding setting, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "Force Decoding Results As for Li et al.", "(2016), for the FConv and LSTM architectures, it is not only consistently worse than our method, but at times also worse than attention.", "Besides, the effect of SmoothGrad is not as consistent on their saliency formulation as on ours.", "Although with the Transformer model the Li et al.", "(2016) method obtained better AER than ours under several settings, it is clear overall that the superior mathematical soundness of our method translates into better interpretation quality.", "While the GIZA++ model obtains the best alignment result in Table 1, most of our word alignment interpretations of the FConv model with SmoothGrad surpass the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on par with Zenkel et al. (2019).", "These are notable results, as our method is an interpretation method and no extra parameter is updated to optimize the quality of alignment.", "On the other hand, this also indicates that it is possible to induce high-quality alignments from an NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "9 While Ghader and Monz (2017) showed that the AER obtained by the LSTM model is close to that of GIZA++, our experiments yield a much larger difference; we think this is largely because we train our models with BPE, which Ghader and Monz (2017) explicitly avoided.", "Most interestingly, although NMT is often deemed to perform poorly in low-resource settings, our interpretation works relatively well on the ro<>en language pair, which happens to be the language pair with the least training data.", "We think this is a phenomenon that merits further exploration.", "Besides, for all reported methods, the overall order in the number of alignment errors is FConv < LSTM < Transformer.", "To the best of our knowledge, this is also a novel insight, as no prior work has compared the attention weights of FConv with those of other architectures.", "We can also observe that while our method is not strong enough to fully bridge the gap in attention noise level between the model architectures, it does manage to narrow the difference in some cases.", "Table 2 shows the results under the free decoding setting.", "The trend in this group of experiments is similar to Table 1, except that Transformer occasionally outperforms LSTM.", "We think this is mainly because Transformer generates higher-quality translations, but it could also be partially attributed to noise in the fast-align reference.", "Also, notice that the AER numbers are generally lower than in Table 1 under this setting.", "One reason is that the model is aligning the output about which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translations in the test set with the NMT output, we find that the NMT output is generally easier to align, as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al.", "(2016) The main reason why the word saliency formulation in Li et al.", "(2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way the input influences it.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e.", "have a positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency on several target words, e.g.", "should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with a large magnitude, indicating a significant negative contribution to the prediction of that target word.", "Hence, a good word alignment interpretation should strongly avoid aligning them.", "Free Decoding Results SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor in reducing AER, especially for Transformer.", "Figure 3 compares saliency maps obtained under different noise levels for the Transformer model in Table 1.", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between the source word wir and the target word we: in Subfigure (a), this word pair has very low saliency, but in (c), it becomes the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very poor representation of the global saliency almost all the time.", "(Table 3: Alignment distribution entropy for selected de-en models; att stands for attention in Table 1.)", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation gives a better estimate of the global saliency.", "It can also be observed from Subfigures (b) and (d) that when the noise is too moderate, as in (b), the evaluation does not deviate enough from the original point to gain non-local information, while in (d) it deviates too much and the resulting alignment becomes almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even to specific input feature values, but interestingly we find that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct a more comprehensive analysis of the effect of SmoothGrad on architectures more complicated than convolutional neural nets.", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviations σ and report their dispersion as measured by the entropy of the (soft) alignment distribution, averaged over the number of target words.", "Results are summarized in Table 3, where lower entropy indicates peakier alignments.", "First, we observe that the dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3.", "It should also be noted that the alignment dispersion is consistently lower for free decoding than for force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise into the model behavior, although judging from this result the gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than that of the other architectures, indicating that with attention, a lot of the probability mass may be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018).", "Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we explored only two saliency methods among the many available (Montavon et al., 2018).", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods to the mostly-convolutional architectures in computer vision and to architectures with more non-linearities in NLP.", "We hope future research from the NLP and machine learning communities can bridge this gap.", "Secondly, the alignment errors in our method come from three different sources: the limitation of NMT models in learning word alignments, the limitation of the interpretation method in recovering interpretable word alignments, and the ambiguity of word alignment itself.", "Although we have shown that high-quality alignments can be recovered from NMT systems (thus advancing our understanding of the limitations of NMT models), we are not yet able to separate these sources of error in this work.", "While exploration in this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such purposes, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we conduct only approximate evaluations to measure the ability of our interpretation method.", "An immediate next step would be to evaluate it on human-annotated translation outputs generated by the NMT system.", "Conclusion We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, can be applied either offline or online, and does not require any parameter updates or architectural changes.", "Both force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality than its attention-based counterpart.", "Our empirical results also probe into the NMT black box and reveal that, even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of quality comparable to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
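The results above rely on two concrete steps: SmoothGrad averaging with σ = 0.15 over n = 30 noisy samples, and converting the resulting soft saliency scores into hard alignments by picking, for each target word, the most salient source word. The sketch below is a minimal illustration of both steps, not the released meerkat code: `model.src_embed`, the `model(...)` call signature, and the tensor layouts are hypothetical stand-ins for whatever the actual NMT system exposes.

```python
import torch

def smoothgrad_word_alignment(model, src_tokens, tgt_tokens, sigma=0.15, n=30):
    """Minimal sketch of saliency-based alignment with SmoothGrad.

    Assumes (hypothetically) that model.src_embed(src_tokens) returns a
    (src_len, dim) embedding matrix and that model(src_embeds, tgt_tokens)
    returns (tgt_len, vocab) probabilities p(t_j | source, t_1:j-1).
    """
    embeds = model.src_embed(src_tokens).detach()          # (src_len, dim)
    saliency = torch.zeros(len(tgt_tokens), embeds.size(0))
    for _ in range(n):
        # SmoothGrad: perturb the queried embeddings with Gaussian noise
        noisy = embeds + sigma * torch.randn_like(embeds)
        noisy.requires_grad_(True)
        probs = model(noisy, tgt_tokens)                   # (tgt_len, vocab)
        for j, t_j in enumerate(tgt_tokens):
            grad, = torch.autograd.grad(probs[j, t_j], noisy, retain_graph=True)
            # psi(s_i, t_j): signed dot product of the gradient with the
            # embedding row of source word s_i (preserves polarity)
            saliency[j] += (grad * embeds).sum(dim=-1)
    saliency /= n
    # hard alignment: for each target word, pick the most salient source word
    return saliency, saliency.argmax(dim=1)
```

When a proper distribution over source words is needed, the per-target scores can then be normalized, e.g. by keeping only the positive part, p(s_i | t_j) ∝ max(0, ψ(s_i, t_j)).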
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-13
Feature in Computer Vision
Photo Credit: Hainan Xu Saliency-driven Word Alignment Interpretation for NMT
Photo Credit: Hainan Xu Saliency-driven Word Alignment Interpretation for NMT
[]
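The slide above points back to saliency's origin in computer vision, where the saliency of a pixel is the gradient of the class score with respect to that pixel (Simonyan et al., 2013). For reference, a minimal sketch of that pixel-level formulation; `classifier` and the tensor shapes are placeholders, not a specific library API:

```python
import torch

def visual_saliency(classifier, image, target_class):
    """Pixel saliency as the gradient of the class score w.r.t. the input.

    image: (channels, height, width) tensor; classifier returns a vector
    of class scores (both are hypothetical placeholders).
    """
    x = image.clone().detach().requires_grad_(True)
    score = classifier(x)[target_class]
    score.backward()
    # collapse color channels: one saliency value per pixel
    return x.grad.abs().max(dim=0).values
```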
GEM-SciDuet-train-38#paper-1054#slide-14
GEM-SciDuet-train-38#paper-1054#slide-14
Feature in NLP
It's straightforward to compute saliency for a single dimension of the word embedding. Saliency-driven Word Alignment Interpretation for NMT But how to compose the saliency of each dimension into the saliency of a word?
It's straightforward to compute saliency for a single dimension of the word embedding. Saliency-driven Word Alignment Interpretation for NMT But how to compose the saliency of each dimension into the saliency of a word?
[]
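This slide asks how to compose per-dimension saliency into the saliency of a whole word. As discussed in the analysis above, the mean-of-gradients formulation of Li et al. (2016) keeps only magnitude, while the paper's formulation takes a signed dot product between the gradient and the word's embedding, preserving polarity. A minimal sketch contrasting the two choices (tensor shapes are assumptions for illustration):

```python
import torch

def compose_word_saliency(grad, embedding):
    """Collapse a (src_len, dim) embedding gradient into one score per word.

    grad is d p(t_j | source) / d W_{s_i}; embedding holds the matching
    embedding rows W_{s_i} (both (src_len, dim), assumed for illustration).
    """
    # Li et al. (2016): mean of the (absolute) gradient -- magnitude only,
    # so positive and negative influence are indistinguishable.
    magnitude_only = grad.abs().mean(dim=-1)
    # This paper: psi(s_i, t_j) = grad . W_{s_i}, a signed score that
    # preserves the polarity of each source word's contribution.
    signed = (grad * embedding).sum(dim=-1)
    return magnitude_only, signed
```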
GEM-SciDuet-train-38#paper-1054#slide-15
difference.", "We think this is largely due to the fact that we choose to train our model with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "alignments from NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "Most interestingly, although NMT is often deemed as performing poorly under low-resource setting, our interpretation seems to work relatively well on ro<>en language pair, which happens to be the language pair that we have least training data for.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To our best knowledge, this is also a novel insight, as no one has analyzed attention weights of FConv with other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap of the attention noise level between different model architecture, it does manage to narrow the difference in some cases.", "Table 2 shows the result under free decoding setting.", "The trend in this group of experiment is similar to Table 1 , except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher quality translations, but could also be partially attributed to the noise in fast-align reference.", "Also, notice that the AER numbers are also generally lower compared to Table 1 under this setting.", "One reason is that our model is aligning output with which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translation in the test set and the NMT output, we find that it is generally easier to align the translation as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al.", "(2016) The main reason why the word saliency formulation in Li et al.", "(2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way does the input influence.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e.", "have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency on several target words, e.g.", "should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating significant negative contributions to the prediction of that target word.", "Hence, a good word alignment interpreta-tion should strongly avoid aligning them.", "Free Decoding Results SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor to reduce AER, especially for Transformer.", "Figure 3 Table 1 .", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve 
gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between source word wir and target word we: in Subfigure (a), this word pair has very low saliency, but in (c), they become the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very Table 3 : Alignment distribution entropy for selected deen models.", "att stands for attention in Table 1. poor representation of the global saliency almost all the time.", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation will give a better estimation of the global saliency.", "It could also be observed from Subfigures (b) and (d) that when the noise is too moderate, the evaluation does not deviate enough from the original spot to gain non-local information, and at (d) it deviates too much and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviation σ and report their dispersion as measured by entropy of the (soft) alignment distribution averaged by number of target words.", "Results are summarized in Ta-ble 3, where lower entropy indicates more peaky alignments.", "First, we observe that dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3 .", "It should also be noted that the alignment dispersion is consistently lower for free decoding than force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise in the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018) .", "Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among many others available (Montavon et al., 2018) .", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods on mostlyconvolutional architectures in computer vision and architectures with more non-linearities in NLP.", "We hope the future research from the 
NLP and machine learning communities could bridge this gap.", "Secondly, the alignment errors in our method comes from three different sources: the limitation of NMT models on learning word alignments, the limitation of interpretation method on recovering interpretable word alignments, and the ambiguity in word alignments itself.", "Although we have shown that high quality alignment could be recovered from NMT systems (thus pushing our understanding on the limitation of NMT models), we are not yet able to separate these sources of errors in this work.", "While exploration on this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such purpose, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we are only conducting approximate evaluation to measure the ability of our interpretation method.", "An immediate future work would be evaluating this on human-annotated translation outputs generated by the NMT system.", "Conclusion We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, is able to be applied either offline or online, and does not require any parameter updates or architectural change.", "Both force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality compared to its attentionbased counterpart.", "Our empirical results also probe into the NMT black-box and reveal that even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-15
Our Proposal
Consider word embedding look-up as a dot product between the embedding matrix and a one-hot vector. The 1 in the one-hot vector denotes the identity of the input word. Let's perturb that like a real value! i.e., take gradients with regard to the one-hot vector.
Consider word embedding look-up as a dot product between the embedding matrix and a one-hot vector. The 1 in the one-hot vector denotes the identity of the input word. Let's perturb that like a real value! i.e., take gradients with regard to the one-hot vector.
[]
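The paper content above (Section 4) defines the word saliency ψ(s_i, t_j) as the partial gradient of p(t_j | Z) with respect to the embedding row of source word s_i, dotted with that row, and then smooths it by averaging gradients over noise-perturbed embeddings (SmoothGrad, σ = 0.15, n = 30). A minimal PyTorch sketch of that computation follows; the `embed_tokens` accessor and the `forward_from_embeddings` entry point are assumptions standing in for whatever a given toolkit exposes (fairseq models need a small wrapper), not the authors' released code, which lives in the linked meerkat repository.

```python
import torch

def word_saliency(model, src_tokens, tgt_tokens, n_samples=30, sigma=0.15):
    """SmoothGrad word saliency: one (T x S) score matrix per sentence pair.

    model.encoder.embed_tokens and model.forward_from_embeddings are
    hypothetical interfaces, not fairseq's real API; adapt as needed.
    src_tokens, tgt_tokens: LongTensors of shape (1, S) and (1, T).
    """
    embed = model.encoder.embed_tokens(src_tokens).detach()  # (1, S, D)
    T, S = tgt_tokens.size(1), src_tokens.size(1)
    scores = torch.zeros(T, S)
    for _ in range(n_samples):
        # SmoothGrad: perturb the queried embedding vectors (not the
        # one-hot inputs) with Gaussian noise N(0, sigma^2).
        noisy = (embed + sigma * torch.randn_like(embed)).requires_grad_(True)
        # Teacher-forced (force-decoding) pass: log-probs for every
        # target position at once, shape (1, T, V).
        log_probs = model.forward_from_embeddings(noisy, tgt_tokens)
        for j in range(T):
            grad, = torch.autograd.grad(log_probs[0, j, tgt_tokens[0, j]],
                                        noisy, retain_graph=True)
            # psi(s_i, t_j): gradient of the embedding row, dotted with
            # the word's own (clean) embedding row.  Using log p instead
            # of p rescales the gradient by 1/p, which is constant across
            # source positions for fixed j, so the per-word argmax that
            # selects the alignment is unchanged.
            scores[j] += (grad[0] * embed[0]).sum(dim=-1)
    return scores / n_samples

# For each target word j, scores[j].argmax() gives the aligned source position.
```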
GEM-SciDuet-train-38#paper-1054#slide-17
Evaluation
Fortunately, there are human judgments to rely on. Need to do force decoding with the NMT model.
Fortunately, there are human judgments to rely on. Need to do force decoding with the NMT model.
[]
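The "Evaluation" slide leans on Alignment Error Rate against hand-labeled alignments. For reference, the standard AER of Och and Ney (2003), which the paper reports, is computed from sure links S, possible links P (with S ⊆ P), and predicted links A as 1 − (|A∩S| + |A∩P|) / (|A| + |S|). A small self-contained sketch, with index pairs as plain Python sets:

```python
def aer(sure, possible, predicted):
    """Alignment Error Rate (Och & Ney, 2003); lower is better.

    sure      -- set of (src, tgt) index pairs annotators marked certain (S)
    possible  -- set of acceptable pairs (P); S must be a subset of P
    predicted -- set of pairs produced by the model (A)
    """
    hits = len(predicted & sure) + len(predicted & possible)
    return 1.0 - hits / (len(predicted) + len(sure))

# Toy example: two sure links, one extra possible link.
S = {(0, 0), (1, 2)}
P = S | {(1, 1)}
A = {(0, 0), (1, 1)}
print(aer(S, P, A))  # 0.25
```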
GEM-SciDuet-train-38#paper-1054#slide-18
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretations of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under the two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequence-to-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvements should be made to these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme, as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing work related to our proposed interpretation method.", "For the attention in RNN-based sequence-to-sequence models, the first comprehensive analysis was conducted by Ghader and Monz (2017).", "They argued that the attention in such systems agrees with word alignment to a certain extent by showing that the RNN-based system achieves an alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also pointed out that the two are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al. (2017) presented a toolkit that facilitates the study of attention in RNN-based models.", "There are also a number of other studies that analyze the attention in Transformer models.", "Tang et al. (2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in the Transformer model (what they refer to as an advanced attention mechanism) is very different from that of the attention in the RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper into this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis of each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long-distance dependencies in higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq model.", "Their goal was to improve translation output, and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016).", "In terms of improvements in word alignment quality, Legrand et al. (2016); Wang et al. (2018) proposed neural word alignment modules decoupled from NMT systems, while Zenkel et al. (2019) introduced a separate module to extract alignments from NMT decoder states, with which they achieved AER comparable to fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al. (2013); Springenberg et al. (2014); Smilkov et al. (2017).", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al. (2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al. (2017) used Layer-wise Relevance Propagation (LRP; Bach et al. 2015), an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model prediction, we are referring to the following problem: given a trained MT model and input tokens $S = \{s_0, s_1, \ldots, s_{I-1}\}$, at a certain time step j when the model predicts t_j, we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t_j might not be $\arg\max_{t_j} p(t_j \mid t_{1:j-1})$, as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem with interpreting attention weights as word alignment.", "Suppose that for the same source sentence, there are two alternative translations that diverge at target time step j, generating t_j and t'_j, which respectively correspond to different source words.", "Presumably, the source word that is aligned to t_j or t'_j should change correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before the prediction of t_j or t'_j.", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation to the specified output label, regardless of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al. (2017)).", "Specifically, they do not refer to the weights of the self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input
feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing an analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x_0, y_0), with y_0 being a specific image class and x_0 being an |X|-dimensional vector.", "Each entry of x_0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x_0, a trained classifier can generate a prediction score for class y_0, denoted as p(y_0 | x_0).", "Consider the first-order Taylor expansion of a perturbed version of this score in the neighborhood of input x_0: $p(y_0 \mid x) \approx p(y_0 \mid x_0) + \left. \frac{\partial p(y_0 \mid x)}{\partial x} \right|_{x_0} \cdot (x - x_0)$ (1)", "This essentially re-formulates the perturbed prediction score p(y_0 | x) as an affine approximation of the input features, with the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to that feature.", "Assuming that a feature deemed salient for a local perturbation of the prediction score is also globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as $\frac{\partial p(y \mid x)}{\partial x}$.", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such a formulation has the following nice properties: • The saliency of an input feature is related to the choice of output class y, as the model scores of different output classes correspond to different sets of parameters and hence result in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient can be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it can be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D tensor corresponding to the intensity level in each channel.", "The key question in applying such a method to NMT is what constitutes the input features of an NMT system.", "Li et al. (2016) proposed to use the embeddings of the input words as the input features to formulate the saliency score, which results in the saliency of an input word being a vector of the same dimension as the embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute value of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and a one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence can be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z bears a certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of the words in the vocabulary.", "For the output word t_j at time step j, we can similarly define the saliency of the one-hot vector z as: $\Psi(z, t_j) = \frac{\partial p(t_j \mid Z)}{\partial z}$ (2) where p(t_j | Z) is the probability of word t_j generated by the NMT model given source sentence Z.",
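To make Definition 1 concrete, here is a minimal PyTorch sketch of gradient-based saliency for a generic differentiable classifier. It illustrates the formulation above and is not code from the paper's release; the model interface (a module mapping a feature vector to class logits) is an assumption.

    import torch

    def saliency(model, x, y):
        # Psi(x, y) = d p(y | x) / d x  (Definition 1).
        # model: a torch.nn.Module mapping a 1-D feature tensor to class logits
        # (an assumed interface); x: 1-D float tensor of input features;
        # y: integer index of the output class of interest.
        x = x.clone().detach().requires_grad_(True)
        prob = torch.softmax(model(x), dim=-1)[y]  # prediction score p(y | x)
        prob.backward()                            # back-propagation fills x.grad
        return x.grad.detach()                     # one saliency value per feature

Because the gradient is taken of the score for a chosen class y, changing y changes the saliency map, which is exactly the property that attention weights lack.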
"Ψ(z, t_j) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z. 1", "Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s_i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s_i: $\psi(s_i, t_j) = \left[ \frac{\partial p(t_j \mid Z)}{\partial z} \right]_{s_i} = \left[ \frac{\partial p(t_j \mid Z)}{\partial W_{s_i}} \cdot \frac{\partial W_{s_i}}{\partial z} \right]_{s_i} = \left[ \frac{\partial p(t_j \mid Z)}{\partial W_{s_i}} \cdot W \right]_{s_i} = \frac{\partial p(t_j \mid Z)}{\partial W_{s_i}} \cdot W_{s_i}$ (3) where $[\cdot]_i$ denotes the i-th row of a matrix or the i-th element of a vector.", "In other words, the saliency ψ(s_i, t_j) is a weighted sum of the word embedding of input word s_i, with the partial gradient of each cell as the weight.", "By comparison, the word saliency 2 in Li et al. (2016) is defined as: $\psi'(s_i, t_j) = \mathrm{mean}\left( \frac{\partial p(t_j \mid Z)}{\partial W_{s_i}} \right)$ (4)", "There are two implementation details that we would like to call to the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of the embedding for that word need to be made to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s_i, t_j) is not a probability distribution, which does not affect word alignment results because we are taking the argmax.", "For the visualizations presented herein, we normalized the distribution by p(s_i | t_j) ∝ max(0, ψ(s_i, t_j)).", "One may also use the softmax function for applications that need a more well-formed probability distribution.", "1 Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table look-up, so for each input word, only one row of the word embedding matrix is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of the 0-entries.", "2 Li et al. (2016) mostly focused on studying saliency at the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradient-based saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e., have a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al. (2017).", "The idea is to augment the input to the network into n samples by adding random noise generated by a normal distribution N(0, σ^2).", "The saliency scores of each augmented sample are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.",
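The following sketch shows one way Equation (3) combined with the SmoothGrad modification just described could be implemented in PyTorch. The model interface (embed and predict functions) is an illustrative assumption rather than fairseq's actual API, and the defaults σ=0.15 and n=30 are the values reported later in the experimental setup.

    import torch

    def smoothgrad_word_saliency(model, src_tokens, tgt_prefix, tgt_word,
                                 sigma=0.15, n_samples=30):
        # psi(s_i, t_j) per Eq. (3), averaged over noisy samples (SmoothGrad).
        # Assumed interface: model.embed(src_tokens) -> (src_len, emb_dim)
        # embedding matrix with one row per token (so repeated words get
        # separate rows, as the implementation note requires), and
        # model.predict(src_embed, tgt_prefix) -> next-word distribution
        # over the target vocabulary.
        embed = model.embed(src_tokens).detach()          # the W_{s_i} rows
        scores = torch.zeros(embed.size(0))
        for _ in range(n_samples):
            # noise is added to the queried embedding vectors, not the one-hot inputs
            noisy = (embed + sigma * torch.randn_like(embed)).requires_grad_(True)
            prob = model.predict(noisy, tgt_prefix)[tgt_word]   # p(t_j | Z)
            grad, = torch.autograd.grad(prob, noisy)
            # Eq. (3): dot each gradient row with the corresponding embedding row
            scores += (grad * embed).sum(dim=-1)
        return scores / n_samples                         # one score per source word

One detail the description leaves open is whether the gradient at the noisy point should be paired with the clean or the noisy embedding row; the sketch uses the clean one, which is one plausible reading.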
"Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations of our proposed method using the resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source and measure AER against this reference. 3", "Notice that both automatic evaluation methods have their respective limitations: the force decoding method may force the model to predict something it deems unlikely, thus generating noisy alignments, whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al. (2019) in the data setup and use the accompanying scripts of that paper 4 for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for the German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq 5 to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014), convolutional systems (FConv) (Gehring et al., 2017), and Transformer systems (Transformer) (Vaswani et al., 2017).", "We use the pre-configured model architectures for the IWSLT German-English experiments 6 to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores. For Transformer, we follow the implementation in fairseq and use the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple versions of the attention weights with the same data augmentation procedure as SmoothGrad and average them. This is to show that smoothing by itself does not improve the interpretation quality and has to be combined with an effective interpretation method; • (Li et al., 2016): applied with normal back-propagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure as Zenkel et al. (2019) to convert soft alignment scores to hard alignments (a simplified version of this conversion, together with the AER computation, is sketched below).", "For force decoding experiments, we generate symmetrized alignment results with grow-diag-final.", "We also include the AER results 7 of fast-align (Dyer et al., 2013), GIZA++ 8 and the best model (Add+SGD) from Zenkel et al. (2019) on the same dataset for comparison.", "However, the reader should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and does not make any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update the model with the full sentence to generate optimal alignments, while our system and Zenkel et al. (2019) can do so on-the-fly.",
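As a rough illustration of the evaluation pipeline, the sketch below converts a matrix of soft alignment scores into hard links via per-target-word argmax and computes AER against sure/possible reference links. This is a simplified stand-in for the conversion procedure of Zenkel et al. (2019), whose exact details may differ; the function names are ours.

    import numpy as np

    def hard_alignment(scores):
        # scores: (tgt_len, src_len) matrix of saliency or attention values.
        # Link each target word j to its highest-scoring source word i.
        return {(int(np.argmax(row)), j) for j, row in enumerate(scores)}

    def aer(hypothesis, sure, possible):
        # Alignment Error Rate (Och and Ney, 2003):
        # AER = 1 - (|A & S| + |A & P|) / (|A| + |S|), with S a subset of P.
        a, s, p = set(hypothesis), set(sure), set(possible)
        return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))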
"7 We reproduced the fast-align results as a sanity check and were able to perfectly replicate their numbers with their released scripts.", "8 https://github.com/moses-smt/giza-pp Realizing the second caveat, we also run fast-align under the online alignment scenario, where we first train a fast-align model and decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014), where we cannot practically update alignment models on-the-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities, such as Zenkel et al. (2019) and this work.", "The data used in Zenkel et al. (2019) does not include a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30-sentence subset of the German-English test data with the Transformer model.", "We ended up using the recommended σ = 0.15 from the original paper and a slightly smaller sample size n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For comparison with previous work, we do not exclude these sentences from the reported results; we instead mark the affected numbers to raise caution.", "Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is reduced only for the FConv model but instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER increases by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by the input noise, as the same smoothing procedure for attention increases the AER most of the time.", "To summarize, at least under the force decoding setting, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "Force Decoding Results As for Li et al. (2016), for the FConv and LSTM architectures, it is not only consistently worse than our method, but at times also worse than attention.", "Besides, the effect of SmoothGrad is also not as consistent for their saliency formulation as for ours.", "Although with the Transformer model the Li et al. (2016) method obtained better AER than our method under several settings, it is still pretty clear overall that the superior mathematical soundness of our method translates into better interpretation quality.", "While the GIZA++ model obtains the best alignment result in Table 1 9, most of our word alignment interpretations of the FConv model with SmoothGrad surpass the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on par with Zenkel et al. (2019).", "These are notable results, as our method is an interpretation method and no extra parameters are updated to optimize the quality of alignment.", "On the other hand, this also indicates that it is possible to induce high-quality alignments from an NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "9 While Ghader and Monz (2017) showed that the AER obtained by the LSTM model is close to that of GIZA++, our experiments yield a much larger
difference.", "We think this is largely due to the fact that we choose to train our model with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "alignments from NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "Most interestingly, although NMT is often deemed as performing poorly under low-resource setting, our interpretation seems to work relatively well on ro<>en language pair, which happens to be the language pair that we have least training data for.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To our best knowledge, this is also a novel insight, as no one has analyzed attention weights of FConv with other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap of the attention noise level between different model architecture, it does manage to narrow the difference in some cases.", "Table 2 shows the result under free decoding setting.", "The trend in this group of experiment is similar to Table 1 , except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher quality translations, but could also be partially attributed to the noise in fast-align reference.", "Also, notice that the AER numbers are also generally lower compared to Table 1 under this setting.", "One reason is that our model is aligning output with which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translation in the test set and the NMT output, we find that it is generally easier to align the translation as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al.", "(2016) The main reason why the word saliency formulation in Li et al.", "(2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way does the input influence.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e.", "have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency on several target words, e.g.", "should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating significant negative contributions to the prediction of that target word.", "Hence, a good word alignment interpreta-tion should strongly avoid aligning them.", "Free Decoding Results SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor to reduce AER, especially for Transformer.", "Figure 3 Table 1 .", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve 
gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between the source word wir and the target word we: in Subfigure (a), this word pair has very low saliency, but in (c), it becomes the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very poor representation of the global saliency almost all the time.", "Table 3: Alignment distribution entropy for selected de-en models. att stands for attention in Table 1.", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation gives a better estimation of the global saliency.", "It can also be observed from Subfigures (b) and (d) that when the noise is too moderate (b), the evaluation does not deviate enough from the original spot to gain non-local information, while at (d) it deviates too much and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even to specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct a more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviations σ and report their dispersion, measured as the entropy of the (soft) alignment distribution averaged over the number of target words (a sketch of this computation follows below).", "Results are summarized in Table 3, where lower entropy indicates more peaky alignments.", "First, we observe that the dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3.", "It should also be noted that the alignment dispersion is consistently lower for free decoding than for force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise into the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than that of the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018).", "Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among the many others available (Montavon et al., 2018).", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods to the mostly-convolutional architectures in computer vision and to architectures with more non-linearities in NLP.", "We hope that future research from the NLP and machine learning communities can bridge this gap.",
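Under our reading of the dispersion metric described in the Alignment Dispersion analysis above, the Table 3 numbers correspond to a computation like the following sketch; details such as the logarithm base are assumptions.

    import numpy as np

    def alignment_dispersion(soft_alignments):
        # soft_alignments: one score vector over source positions per target word.
        # Returns the entropy of each (normalized) alignment distribution,
        # averaged over the number of target words; lower means peakier.
        entropies = []
        for dist in soft_alignments:
            p = np.asarray(dist, dtype=float)
            p = p / p.sum()              # normalize to a probability distribution
            p = p[p > 0]                 # drop zeros to avoid log(0)
            entropies.append(float(-(p * np.log(p)).sum()))
        return sum(entropies) / len(entropies)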
"Secondly, the alignment errors in our method come from three different sources: the limitations of NMT models in learning word alignments, the limitations of the interpretation method in recovering interpretable word alignments, and the ambiguity of word alignment itself.", "Although we have shown that high quality alignments can be recovered from NMT systems (thus pushing our understanding of the limitations of NMT models), we are not yet able to separate these sources of errors in this work.", "While exploration in this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such purposes, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we are only conducting approximate evaluations to measure the ability of our interpretation method.", "An immediate future work would be evaluating this on human-annotated translation outputs generated by the NMT system.", "Conclusion We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, can be applied either offline or online, and does not require any parameter updates or architectural changes.", "Both force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality compared to its attention-based counterpart.", "Our empirical results also probe into the NMT black-box and reveal that even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-18
Setup
Architecture: Convolutional S2S, LSTM, Transformer (with fairseq default hyper-parameters) Dataset: Following Zenkel et al. [2019], which covers de-en, fr-en and ro-en. SmoothGrad hyper-parameters: N=30 and σ=0.15
Architecture: Convolutional S2S, LSTM, Transformer (with fairseq default hyper-parameters) Dataset: Following Zenkel et al. [2019], which covers de-en, fr-en and ro-en. SmoothGrad hyper-parameters: N=30 and σ=0.15
[]
GEM-SciDuet-train-38#paper-1054#slide-19
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which can only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, can be applied either offline or online, and do not require parameter updates or architectural changes. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretation of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequenceto-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvement should be made on these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing related work to our proposed interpretation method.", "For the attention in RNN-based sequence-tosequence model, the first comprehensive analysis is conducted by Ghader and Monz (2017) .", "They argued that the attention in such systems agree with word alignment to a certain extent by showing that the RNN-based system achieves comparable alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that they are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al.", "(2017) presented a toolkit that facilitates study for the attention in RNN-based models.", "There is also a number of other studies that analyze the attention in Transformer models.", "Tang et al.", "(2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in Transformer model (what they refer to as advanced attention mechanism) is very different from that of the attention in RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper in this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis on each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq models.", "Their goal was to improve translation output and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016) .", "In terms of improvements in word alignment quality, Legrand et al.", "(2016) ; Wang et al.", "(2018) ; proposed neu-ral word alignment modules decoupled from NMT systems, while Zenkel et al.", "(2019) introduced a separate module to extract alignment from NMT decoder states, with which they achieved comparable AER with fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al.", "(2013); Springenberg et al.", "(2014) ; Smilkov et al.", "(2017) .", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al.", "(2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al.", "(2017) used Layer-wise Relevance Propagation (LRP; Bach et al.", "2015) , an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model prediction, we are referring to the following problem: given a trained MT model and input tokens S = {s 0 , s 1 , .", ".", ".", ", s I−1 }, at a certain time step j when the models predicts t j , we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t j might not be arg max t j p(t j | t 1:j−1 ), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem regarding interpreting attention weights as word alignment.", "Suppose for the same source sentence, there are two alternative translations that diverge at target time step j, generating t j and t ′ j which respectively correspond to different source words.", "Presumably, the source word that is aligned to t j and t ′ j should changed correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before prediction of t j or t ′ j .", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation with the specified output label, regard-less of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al.", "(2017) ).", "Specifically, they do not refer to the weight of self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input 
feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x 0 , y 0 ), with y 0 being a specific image class and x 0 being an |X |-dimensional vector.", "Each entry of x 0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x 0 , a trained classifier can generate a prediction score for class y 0 , denoted as p(y 0 | x 0 ).", "Consider the first-order Taylor expansion of a perturbed version of this score at the neighborhood of input x 0 : p(y 0 | x) ≈ p(y 0 | x 0 ) + ∂p(y 0 | x) ∂x x 0 · (x − x 0 ) (1) This is essentially re-formulating the perturbed prediction score p(y 0 | x) as an affine approximation of the input features, while the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to the feature.", "Assuming a feature that is deemed as salient for the local perturbation of the prediction score would also be globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as ∂p(y | x) ∂x .", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such formulation has following nice properties: • The saliency of an input feature is related to the choice of output class y, as model scores of different output classes correspond to a different set of parameters, and hence resulting in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient could be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it could be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D Tensor corresponding to the level in each channel.", "The key question to apply such method to NMT is what constitutes the input feature to a NMT system.", "Li et al.", "(2016) proposed to use the embedding of of the input words as the input feature to formulate saliency score, which results in the saliency of an input word being a vector of the same dimension as embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute value of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and an one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence could be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z has certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of the words in the vocabulary.", "For the output word t j at time step j, we can similarly define the saliency of the one-hot vector z as: Ψ(z, t j ) = ∂p(t j | Z) ∂z (2) where p(t j | Z) is the probability of word t j generated by the NMT model given source sentence Z. 
Ψ(z, t j ) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z.", "1 Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s i : ψ(s i , t j ) = [ ∂p(t j | Z) ∂z ] s i = [ ∂p(t j | Z) ∂W s i · ∂W s i ∂z ] s i = [ ∂p(t j | Z) ∂W s i · W ] s i = ∂p(t j | Z) ∂W s i · W s i (3) where [·] i denotes the i-th row of a matrix or the ith element of a vector.", "In other words, the saliency ψ(s i , t j ) is a weighted sum of the word embedding of input word s i , with the partial gradient of each cell as the weight.", "By comparison, the word saliency 2 in Li et al.", "(2016) is defined as: ψ ′ (s i , t j ) = mean ( ∂p(t j | Z) ∂W s i ) (4) There are two implementation details that we would like to call for the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of embedding for such word need to be made to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s i , t j ) is not a probability distribution, which does not affect word alignment results because we are taking arg max.", "For visualizations presented herein, we normalized the distribution by p( s i | t j ) ∝ max(0, ψ(s i , t j )).", "One may also use softmax function for applications that need more well-formed probability distribution.", "1 Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of 0-entries.", "2 Li et al.", "(2016) mostly focused on studying saliency on the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradientbased saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e.", "having a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al.", "(2017) .", "The idea is to augment the input to the network into n samples by adding random noise generated by normal distribution N (0, σ 2 ).", "The saliency scores of each augmented sample are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot 
vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.", "Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations for our proposed method using resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source and measure AER against this reference.", "3 Notice that both automatic evaluation methods have their respective limitation: the force decoding method may force the model to predict something it deems unlikely, and thus generating noisy alignment; whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al.", "(2019) in data setup and use the accompanied scripts of that paper 4 for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq 5 to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014) , convolution systems (FConv) (Gehring et al., 2017) , and Transformer systems (Transformer) (Vaswani et al., 2017) .", "We use the pre-configured model architectures for IWSLT German-English experiments 6 to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For transformer, we follow the implementation in fairseq and used the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple version of attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to prove that smoothing itself does not improve the interpretation quality, and has to be used together with effective interpretation method; • (Li et al., 2016) : applied with normal backpropagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure in (Zenkel et al., 2019) to convert soft alignment scores to hard alignment.", "For force decoding experiments, we generate symmetrized alignment results with growdiag-final.", "We also include AER results 7 of fast-align (Dyer et al., 2013) , GIZA++ 8 and the best model (Add+SGD) from Zenkel et al.", "(2019) on the same dataset for comparison.", "However, the readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and is not making any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update model with full sentence to generate optimal 
alignments, while our system and Zenkel et al.", "(2019) can do so on-the-fly.", "7 We reproduced the fast-align results as a sanity check and we were able to perfectly replicate their numbers with their released scripts.", "8 https://github.com/moses-smt/giza-pp Realizing the second caveat, we also run fastalign under the online alignment scenario, where we first train a fast-align model and decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) , where we cannot practically update alignment models onthe-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities such as Zenkel et al.", "(2019) and this work.", "The data used in Zenkel et al.", "(2019) did not provide a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30sentence subset of the German-English test data with the Transformer model.", "We ended up using the recommended σ = 0.15 in the original paper and a slightly smaller sample size n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For com-parison with previous work, we do not exclude these sentences from the reported results, we instead mark the numbers affected to raise caution.", "Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is only reduced for FConv model but instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER increases by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up with 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by input noise, as the same smoothing procedure for attention increases the AER most of the times.", "To summarize, at least under force decoding settings, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "Force Decoding Results As for Li et al.", "(2016) , for FConv and LSTM architectures, it is not only consistently worse than our method, but at times also worse than attention.", "Besides, the effect of SmoothGrad is also not as consistent on their saliency formulation as ours.", "Although with the Transformer model, the Li et al.", "(2016) method obtained better AER than our method under several settings, it is still pretty clear overall that the superior mathematical soundness of our method is translated into better interpretation quality.", "While the GIZA++ model obtains the best alignment result in Table 1 9 , most of our word alignment interpretation of FConv model with Smooth-Grad surpasses the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on-par with (Zenkel et al., 2019) .", "These are notable results as our method is an interpretation method and no extra parameter is updated to optimize the quality of alignment.", "On the other hand, this also indicates that it is possible to induce high-quality 9 While Ghader and Monz (2017) showed that the AER obtained by LSTM model is close to that of GIZA++, our experiments yield a much larger 
difference.", "We think this is largely due to the fact that we choose to train our model with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "alignments from NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "Most interestingly, although NMT is often deemed as performing poorly under low-resource setting, our interpretation seems to work relatively well on ro<>en language pair, which happens to be the language pair that we have least training data for.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To our best knowledge, this is also a novel insight, as no one has analyzed attention weights of FConv with other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap of the attention noise level between different model architecture, it does manage to narrow the difference in some cases.", "Table 2 shows the result under free decoding setting.", "The trend in this group of experiment is similar to Table 1 , except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher quality translations, but could also be partially attributed to the noise in fast-align reference.", "Also, notice that the AER numbers are also generally lower compared to Table 1 under this setting.", "One reason is that our model is aligning output with which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translation in the test set and the NMT output, we find that it is generally easier to align the translation as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al.", "(2016) The main reason why the word saliency formulation in Li et al.", "(2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way does the input influence.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e.", "have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency on several target words, e.g.", "should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating significant negative contributions to the prediction of that target word.", "Hence, a good word alignment interpreta-tion should strongly avoid aligning them.", "Free Decoding Results SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor to reduce AER, especially for Transformer.", "Figure 3 Table 1 .", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve 
"SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor in reducing AER, especially for Transformer.", "Figure 3 illustrates this with the Transformer model from Table 1, showing the saliency maps obtained under different SmoothGrad noise levels.", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between the source word wir and the target word we: in Subfigure (a), this word pair has very low saliency, but in (c), it becomes the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very poor representation of the global saliency almost all the time.", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation gives a better estimation of the global saliency.", "It can also be observed from Subfigures (b) and (d) that when the noise is too moderate, as in (b), the evaluation does not deviate enough from the original spot to gain non-local information, while at (d) it deviates too much and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even to specific input feature values, but interestingly we find that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct a more comprehensive analysis of the effect of SmoothGrad on architectures more complicated than convolutional neural nets.", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviations σ and report their dispersion, as measured by the entropy of the (soft) alignment distribution averaged over the number of target words.", "Results are summarized in Table 3 (alignment distribution entropy for selected de-en models; att stands for attention in Table 1), where lower entropy indicates more peaky alignments.", "First, we observe that the dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3.", "It should also be noted that the alignment dispersion is consistently lower for free decoding than for force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise in the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than that of the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding of Raganato and Tiedemann (2018).",
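The dispersion measure used above is straightforward to compute. A minimal sketch, assuming each row of soft alignment scores is non-negative (e.g. after the max(0, ψ) clipping used for visualization):

```python
# Hedged sketch: per-target-word entropy of the soft alignment
# distribution, averaged over target words (lower = peakier alignments).
import math

def alignment_dispersion(scores, eps=1e-12):
    # scores[j][i]: non-negative soft alignment score of source word i
    # for target word j.
    total = 0.0
    for row in scores:
        z = sum(row) + eps
        probs = [s / z for s in row]
        total -= sum(p * math.log(p + eps) for p in probs)
    return total / len(scores)
```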
"Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among the many others available (Montavon et al., 2018).", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods to the mostly-convolutional architectures in computer vision and applying them to architectures with more non-linearities in NLP.", "We hope that future research from the NLP and machine learning communities can bridge this gap.", "Secondly, the alignment errors in our method come from three different sources: the limitations of NMT models in learning word alignments, the limitations of the interpretation method in recovering interpretable word alignments, and the ambiguity of word alignment itself.", "Although we have shown that high-quality alignments can be recovered from NMT systems (thus pushing our understanding of the limitations of NMT models), we are not yet able to separate these sources of error in this work.", "While exploration in this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such a purpose, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we only conduct approximate evaluations to measure the ability of our interpretation method.", "An immediate direction for future work would be to evaluate on human-annotated alignments of translation outputs generated by the NMT system.", "Conclusion We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, can be applied either offline or online, and does not require any parameter updates or architectural changes.", "Both force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality than its attention-based counterpart.", "Our empirical results also probe into the NMT black box and reveal that even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-19
Baselines
Smoothed Attention: forward pass on multiple corrupted input samples, then average the attention weights over the samples; [Li et al. 2016]: compute the element-wise absolute value of the embedding gradients, then average over embedding dimensions
[]
GEM-SciDuet-train-38#paper-1054#slide-20
Convolutional S2S on de-en
Saliency-driven Word Alignment Interpretation for NMT
[]
GEM-SciDuet-train-38#paper-1054#slide-21
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretation of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequenceto-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvement should be made on these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing related work to our proposed interpretation method.", "For the attention in RNN-based sequence-tosequence model, the first comprehensive analysis is conducted by Ghader and Monz (2017) .", "They argued that the attention in such systems agree with word alignment to a certain extent by showing that the RNN-based system achieves comparable alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that they are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al.", "(2017) presented a toolkit that facilitates study for the attention in RNN-based models.", "There is also a number of other studies that analyze the attention in Transformer models.", "Tang et al.", "(2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in Transformer model (what they refer to as advanced attention mechanism) is very different from that of the attention in RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper in this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis on each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq models.", "Their goal was to improve translation output and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016) .", "In terms of improvements in word alignment quality, Legrand et al.", "(2016) ; Wang et al.", "(2018) ; proposed neu-ral word alignment modules decoupled from NMT systems, while Zenkel et al.", "(2019) introduced a separate module to extract alignment from NMT decoder states, with which they achieved comparable AER with fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al.", "(2013); Springenberg et al.", "(2014) ; Smilkov et al.", "(2017) .", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al.", "(2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al.", "(2017) used Layer-wise Relevance Propagation (LRP; Bach et al.", "2015) , an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model prediction, we are referring to the following problem: given a trained MT model and input tokens S = {s 0 , s 1 , .", ".", ".", ", s I−1 }, at a certain time step j when the models predicts t j , we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t j might not be arg max t j p(t j | t 1:j−1 ), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem regarding interpreting attention weights as word alignment.", "Suppose for the same source sentence, there are two alternative translations that diverge at target time step j, generating t j and t ′ j which respectively correspond to different source words.", "Presumably, the source word that is aligned to t j and t ′ j should changed correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before prediction of t j or t ′ j .", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation with the specified output label, regard-less of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al.", "(2017) ).", "Specifically, they do not refer to the weight of self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input 
feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing an analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x_0, y_0), with y_0 being a specific image class and x_0 being an |X|-dimensional vector.", "Each entry of x_0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x_0, a trained classifier can generate a prediction score for class y_0, denoted as p(y_0 | x_0).", "Consider the first-order Taylor expansion of a perturbed version of this score in the neighborhood of input x_0: p(y_0 | x) ≈ p(y_0 | x_0) + ∂p(y_0 | x)/∂x |_{x_0} · (x − x_0)   (1) This essentially re-formulates the perturbed prediction score p(y_0 | x) as an affine approximation of the input features, with the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to that feature.", "Assuming that a feature deemed salient under local perturbation of the prediction score would also be globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as ∂p(y | x)/∂x.", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such a formulation has the following nice properties: • The saliency of an input feature is related to the choice of output class y, as model scores of different output classes correspond to different sets of parameters, hence resulting in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient can be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it can be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D tensor corresponding to the level in each channel.", "The key question in applying such a method to NMT is what constitutes the input feature to an NMT system.", "Li et al.", "(2016) proposed to use the embeddings of the input words as the input features to formulate the saliency score, which results in the saliency of an input word being a vector of the same dimension as the embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute values of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and a one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence can be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z has a certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of a word in the vocabulary.", "For the output word t_j at time step j, we can similarly define the saliency of the one-hot vector z as: Ψ(z, t_j) = ∂p(t_j | Z)/∂z   (2) where p(t_j | Z) is the probability of word t_j generated by the NMT model given source sentence Z.
Ψ(z, t_j) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z.[1] Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s_i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s_i: ψ(s_i, t_j) = [∂p(t_j | Z)/∂z]_{s_i} = [∂p(t_j | Z)/∂W_{s_i} · ∂W_{s_i}/∂z]_{s_i} = [∂p(t_j | Z)/∂W_{s_i} · W]_{s_i} = ∂p(t_j | Z)/∂W_{s_i} · W_{s_i}   (3) where [·]_i denotes the i-th row of a matrix or the i-th element of a vector.", "In other words, the saliency ψ(s_i, t_j) is a weighted sum of the word embedding of input word s_i, with the partial gradient of each cell as the weight.", "By comparison, the word saliency[2] in Li et al.", "(2016) is defined as: ψ'(s_i, t_j) = mean(∂p(t_j | Z)/∂W_{s_i})   (4) There are two implementation details that we would like to call to the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of the embedding for such a word need to be made to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s_i, t_j) is not a probability distribution, which does not affect word alignment results because we are taking the arg max.", "For the visualizations presented herein, we normalized the distribution by p(s_i | t_j) ∝ max(0, ψ(s_i, t_j)).", "One may also use the softmax function for applications that need a more well-formed probability distribution.", "[Footnote 1] Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding matrix is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of 0-entries.", "[Footnote 2] Li et al.", "(2016) mostly focused on studying saliency at the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradient-based saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e.", "having a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al.", "(2017).", "The idea is to augment the input to the network into n samples by adding random noise generated by the normal distribution N(0, σ²).", "The saliency scores of the augmented samples are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot
vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.", "Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations for our proposed method using the resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus, and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source, and measure AER against this reference.[3]", "Notice that both automatic evaluation methods have their respective limitations: the force decoding method may force the model to predict something it deems unlikely, thus generating noisy alignments, whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al.", "(2019) in data setup and use the accompanying scripts of that paper[4] for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for the German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq[5] to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014), convolutional systems (FConv) (Gehring et al., 2017), and Transformer systems (Transformer) (Vaswani et al., 2017).", "We use the pre-configured model architectures for the IWSLT German-English experiments[6] to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For Transformer, we follow the implementation in fairseq and use the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple versions of the attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to show that smoothing by itself does not improve the interpretation quality and has to be used together with an effective interpretation method; • (Li et al., 2016): applied with normal back-propagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure as Zenkel et al. (2019) to convert soft alignment scores to hard alignments.", "For the force decoding experiments, we generate symmetrized alignment results with grow-diag-final.", "We also include the AER results[7] of fast-align (Dyer et al., 2013), GIZA++[8] and the best model (Add+SGD) from Zenkel et al.", "(2019) on the same dataset for comparison.", "However, readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and does not make any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update the model with the full sentence to generate optimal
alignments, while our system and Zenkel et al.", "(2019) can do so on-the-fly.", "[Footnote 7] We reproduced the fast-align results as a sanity check, and we were able to perfectly replicate their numbers with their released scripts.", "[Footnote 8] https://github.com/moses-smt/giza-pp Realizing the second caveat, we also run fast-align under the online alignment scenario, where we first train a fast-align model and then decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014), where we cannot practically update alignment models on-the-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities, such as Zenkel et al.", "(2019) and this work.", "The data used by Zenkel et al.", "(2019) does not provide a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30-sentence subset of the German-English test data with the Transformer model.", "We ended up using the σ = 0.15 recommended in the original paper and a slightly smaller sample size n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For comparison with previous work, we do not exclude these sentences from the reported results; instead, we mark the affected numbers to raise caution.", "Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is only reduced for the FConv model but instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER increases by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by input noise, as the same smoothing procedure for attention increases the AER most of the time.", "To summarize, at least under the force decoding setting, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "Force Decoding Results As for Li et al.", "(2016), for the FConv and LSTM architectures, it is not only consistently worse than our method, but at times also worse than attention.", "Besides, the effect of SmoothGrad is also not as consistent on their saliency formulation as on ours.", "Although with the Transformer model the Li et al.", "(2016) method obtained better AER than our method under several settings, it is still quite clear overall that the superior mathematical soundness of our method translates into better interpretation quality.", "While the GIZA++ model obtains the best alignment results in Table 1,[9] most of our word alignment interpretations of the FConv model with SmoothGrad surpass the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on par with Zenkel et al. (2019).", "These are notable results, as our method is an interpretation method and no extra parameters are updated to optimize the quality of the alignment.", "On the other hand, this also indicates that it is possible to induce high-quality [Footnote 9: While Ghader and Monz (2017) showed that the AER obtained by the LSTM model is close to that of GIZA++, our experiments yield a much larger
difference.", "We think this is largely due to the fact that we choose to train our model with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "alignments from NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "Most interestingly, although NMT is often deemed as performing poorly under low-resource setting, our interpretation seems to work relatively well on ro<>en language pair, which happens to be the language pair that we have least training data for.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To our best knowledge, this is also a novel insight, as no one has analyzed attention weights of FConv with other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap of the attention noise level between different model architecture, it does manage to narrow the difference in some cases.", "Table 2 shows the result under free decoding setting.", "The trend in this group of experiment is similar to Table 1 , except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher quality translations, but could also be partially attributed to the noise in fast-align reference.", "Also, notice that the AER numbers are also generally lower compared to Table 1 under this setting.", "One reason is that our model is aligning output with which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translation in the test set and the NMT output, we find that it is generally easier to align the translation as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al.", "(2016) The main reason why the word saliency formulation in Li et al.", "(2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way does the input influence.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e.", "have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency on several target words, e.g.", "should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating significant negative contributions to the prediction of that target word.", "Hence, a good word alignment interpreta-tion should strongly avoid aligning them.", "Free Decoding Results SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor to reduce AER, especially for Transformer.", "Figure 3 Table 1 .", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve 
gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between source word wir and target word we: in Subfigure (a), this word pair has very low saliency, but in (c), they become the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very poor representation of the global saliency almost all the time.", "[Table 3: Alignment distribution entropy for selected de-en models; att stands for attention in Table 1.]", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation gives a better estimation of the global saliency.", "It can also be observed from Subfigures (b) and (d) that when the noise is too moderate, the evaluation does not deviate enough from the original spot to gain non-local information, and at (d) it deviates too much, and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct a more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviations σ and report their dispersion, as measured by the entropy of the (soft) alignment distribution averaged over the number of target words.", "Results are summarized in Table 3, where lower entropy indicates more peaky alignments.", "First, we observe that the dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3.", "It should also be noted that the alignment dispersion is consistently lower for free decoding than for force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise into the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than that of the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018).", "Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among the many others available (Montavon et al., 2018).", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods to the mostly-convolutional architectures in computer vision and to architectures with more non-linearities in NLP.", "We hope future research from the
NLP and machine learning communities can bridge this gap.", "Secondly, the alignment errors in our method come from three different sources: the limitation of NMT models in learning word alignments, the limitation of the interpretation method in recovering interpretable word alignments, and the ambiguity in word alignment itself.", "Although we have shown that high-quality alignments can be recovered from NMT systems (thus pushing our understanding of the limitations of NMT models), we are not yet able to separate these sources of errors in this work.", "While exploration in this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such a purpose, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we only conduct an approximate evaluation to measure the ability of our interpretation method.", "An immediate piece of future work would be to evaluate this on human-annotated translation outputs generated by the NMT system.", "Conclusion We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, can be applied either offline or online, and does not require any parameter updates or architectural changes.", "Both force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality than its attention-based counterpart.", "Our empirical results also probe into the NMT black box and reveal that even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
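The word-saliency computation in Equation (3) above boils down to a single backward pass: the saliency of source word s_i for target word t_j is the dot product between the gradient of p(t_j | Z) with respect to the queried embedding of s_i and that embedding itself. The PyTorch sketch below illustrates this; the tiny bag-of-embeddings model (embed, proj) and all token indices are invented stand-ins, not the paper's fairseq systems — only the saliency arithmetic mirrors the method.

import torch
import torch.nn as nn

torch.manual_seed(0)
embed = nn.Embedding(100, 16)   # toy source embedding table (vocab 100, dim 16)
proj = nn.Linear(16, 100)       # toy stand-in "decoder" scoring target words

src = torch.tensor([[5, 17, 42, 8]])   # one source sentence, batch size 1
tgt_word = 23                          # the target word t_j being explained

emb = embed(src)        # (1, I, dim): one queried embedding row per position,
emb.retain_grad()       # so repeated source tokens keep separate gradients
p = torch.softmax(proj(emb.mean(dim=1)), dim=-1)[0, tgt_word]
p.backward()            # one backward pass yields dp(t_j | Z)/d(embedding)

saliency = (emb.grad * emb).sum(dim=-1).squeeze(0)  # Equation (3) per source word
print(saliency.tolist(), int(saliency.argmax()))    # argmax -> aligned source index

Because the gradient carries sign, a negatively contributing source word (like the nur example discussed above) receives negative saliency — exactly the polarity that the mean-of-gradients formulation of Li et al. (2016) cannot distinguish.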
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-21
Attention on de-en
Saliency-driven Word Alignment Interpretation for NMT
Saliency-driven Word Alignment Interpretation for NMT
[]
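The evaluation pipeline described in the paper content above needs two small pieces: converting a target-by-source matrix of soft scores into hard links (the per-target-word argmax conversion borrowed from Zenkel et al., 2019) and scoring them with Alignment Error Rate. The sketch below uses the standard Och-and-Ney AER with sure links S a subset of possible links P; the 3x3 matrix and gold links are invented purely for illustration.

import numpy as np

def hard_alignment(scores):
    # scores[j, i] = saliency of source word i for target word j
    return {(int(np.argmax(row)), j) for j, row in enumerate(scores)}

def aer(hyp, sure, possible):
    # AER = 1 - (|A & S| + |A & P|) / (|A| + |S|)
    return 1.0 - (len(hyp & sure) + len(hyp & possible)) / (len(hyp) + len(sure))

scores = np.array([[0.7, 0.1, 0.2],   # target word 0
                   [0.2, 0.6, 0.2],   # target word 1
                   [0.1, 0.2, 0.7]])  # target word 2
hyp = hard_alignment(scores)          # {(0, 0), (1, 1), (2, 2)}
sure = {(0, 0), (1, 1)}
possible = sure | {(2, 2)}
print(sorted(hyp), aer(hyp, sure, possible))   # perfect here: AER = 0.0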
GEM-SciDuet-train-38#paper-1054#slide-22
GEM-SciDuet-train-38#paper-1054#slide-22
Ours + SmoothGrad on de-en
Saliency-driven Word Alignment Interpretation for NMT
Saliency-driven Word Alignment Interpretation for NMT
[]
GEM-SciDuet-train-38#paper-1054#slide-23
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretations of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce the Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under the two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common belief, architectures such as convolutional sequence-to-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvements should be made to these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the predictions of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme, as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing work related to our proposed interpretation method.", "For the attention in RNN-based sequence-to-sequence models, the first comprehensive analysis was conducted by Ghader and Monz (2017).", "They argued that the attention in such systems agrees with word alignment to a certain extent by showing that the RNN-based system achieves an alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that the two are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al. (2017) presented a toolkit that facilitates the study of attention in RNN-based models.", "There are also a number of other studies that analyze the attention in Transformer models.", "Tang et al. (2018a,b) conducted targeted evaluations of neural machine translation models on two tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in the Transformer model (what they refer to as the advanced attention mechanism) is very different from that of the attention in the RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dig deeper into this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis of each attention head and each layer, where they noticed several distinct patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but captures long-distance dependencies in higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of RNN-based Seq2Seq models.", "Their goal was to improve translation output, and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016).", "In terms of improvements in word alignment quality, Legrand et al. (2016) and Wang et al. (2018) proposed neural word alignment modules decoupled from NMT systems, while Zenkel et al. (2019) introduced a separate module to extract alignments from NMT decoder states, with which they achieved AER comparable to fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from the visual saliency methods proposed by Simonyan et al. (2013), Springenberg et al. (2014), and Smilkov et al. (2017).", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al. (2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al. (2017) used Layer-wise Relevance Propagation (LRP; Bach et al., 2015), an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model predictions, we are referring to the following problem: given a trained MT model and input tokens S = {s_0, s_1, ..., s_{I-1}}, at a certain time step j when the model predicts t_j, we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t_j might not be $\arg\max_{t_j} p(t_j \mid t_{1:j-1})$, as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem with interpreting attention weights as word alignment.", "Suppose that, for the same source sentence, there are two alternative translations that diverge at target time step j, generating t_j and t'_j, which respectively correspond to different source words.", "Presumably, the source word that is aligned to t_j or t'_j should change correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weights are computed before the prediction of t_j or t'_j.", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation to the specified output label, regardless of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al. (2017)).", "Specifically, they do not refer to the weights of the self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input 
feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing an analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x_0, y_0), with y_0 being a specific image class and x_0 being an |X|-dimensional vector.", "Each entry of x_0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x_0, a trained classifier can generate a prediction score for class y_0, denoted as p(y_0 | x_0).", "Consider the first-order Taylor expansion of a perturbed version of this score in the neighborhood of the input x_0: $p(y_0 \mid x) \approx p(y_0 \mid x_0) + \frac{\partial p(y_0 \mid x)}{\partial x}\Big|_{x_0} \cdot (x - x_0)$ (1).", "This essentially re-formulates the perturbed prediction score p(y_0 | x) as an affine approximation of the input features, with the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to that feature.", "Assuming that a feature deemed salient under local perturbation of the prediction score would also be globally salient, the saliency of an input feature is defined as follows: Definition 1. Denoted as Ψ(x, y), the saliency of a feature vector x with regard to an output class y is defined as $\frac{\partial p(y \mid x)}{\partial x}$.", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "This formulation has the following nice properties: • The saliency of an input feature is related to the choice of output class y, as the model scores of different output classes correspond to different sets of parameters, and hence result in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient can be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it can be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D tensor corresponding to the pixel level in each channel.", "The key question in applying such a method to NMT is what constitutes the input features of an NMT system.", "Li et al. (2016) proposed to use the embeddings of the input words as the input features when formulating the saliency score, which results in the saliency of an input word being a vector of the same dimension as the embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute values of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and a one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence can be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z bears a certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of a word in the vocabulary.", "For the output word t_j at time step j, we can similarly define the saliency of the one-hot vector z as $\Psi(z, t_j) = \frac{\partial p(t_j \mid Z)}{\partial z}$ (2), where p(t_j | Z) is the probability of word t_j generated by the NMT model given the source sentence Z.",
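In an autodiff framework, Eq. (2) — and the word-level form derived from it in Eq. (3) just below — amounts to a single backward pass followed by a gradient-times-embedding dot product. Here is a minimal, hypothetical PyTorch sketch; the TinyNMT toy model, the word_saliency helper, and the token ids are all invented for illustration and are not the paper's released code:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for an NMT model: embeds source tokens and produces
# a target-vocabulary distribution for a single decoding step.
class TinyNMT(nn.Module):
    def __init__(self, src_vocab=100, tgt_vocab=100, dim=16):
        super().__init__()
        self.embed = nn.Embedding(src_vocab, dim)
        self.proj = nn.Linear(dim, tgt_vocab)

    def step_probs(self, src_emb):
        # Toy "decoder": mean-pool the source embeddings, predict one step.
        return torch.softmax(self.proj(src_emb.mean(dim=0)), dim=-1)

def word_saliency(model, src_ids, tgt_id):
    # Detach the queried embedding rows so they become leaf tensors whose
    # gradients we can read off after backward().
    src_emb = model.embed(src_ids).detach().requires_grad_(True)
    p = model.step_probs(src_emb)[tgt_id]        # p(t_j | Z)
    p.backward()
    # Gradient of p w.r.t. each embedding row, dotted with that row:
    # one scalar saliency per source word.
    return (src_emb.grad * src_emb).sum(dim=-1)

model = TinyNMT()
src_ids = torch.tensor([3, 17, 42])              # made-up source token ids
print(word_saliency(model, src_ids, tgt_id=7))
```

Note that the interpretation adapts to whichever target label tgt_id is queried, which is exactly the property argued for in the interpretation problem above.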
"Ψ(z, t_j) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If a pixel's level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z.1", "Hence, we only take the 1-entries of the matrix Z as the input to the NMT model.", "For a source word s_i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to the source word s_i: $\psi(s_i, t_j) = \left[\frac{\partial p(t_j \mid Z)}{\partial z}\right]_{s_i} = \left[\frac{\partial p(t_j \mid Z)}{\partial W_{s_i}} \cdot \frac{\partial W_{s_i}}{\partial z}\right]_{s_i} = \left[\frac{\partial p(t_j \mid Z)}{\partial W_{s_i}} \cdot W\right]_{s_i} = \frac{\partial p(t_j \mid Z)}{\partial W_{s_i}} \cdot W_{s_i}$ (3), where $[\cdot]_i$ denotes the i-th row of a matrix or the i-th element of a vector.", "In other words, the saliency ψ(s_i, t_j) is a weighted sum of the word embedding of the input word s_i, with the partial gradient of each cell as the weight.", "By comparison, the word saliency2 in Li et al. (2016) is defined as $\psi'(s_i, t_j) = \operatorname{mean}\left(\frac{\partial p(t_j \mid Z)}{\partial W_{s_i}}\right)$ (4).", "There are two implementation details that we would like to call to the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of the embedding for that word need to be made to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s_i, t_j) is not a probability distribution, which does not affect word alignment results because we are taking the arg max.", "For the visualizations presented herein, we normalized the distribution by p(s_i | t_j) ∝ max(0, ψ(s_i, t_j)).", "One may also use the softmax function for applications that need a more well-formed probability distribution.", "1 Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding matrix is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only the parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of the 0-entries.", "2 Li et al. (2016) mostly focused on studying saliency at the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradient-based saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e., have a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al. (2017).", "The idea is to augment the input to the network into n samples by adding random noise generated by a normal distribution N(0, σ²).", "The saliency scores of each augmented sample are then averaged to cancel out the noise in the gradients.",
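A rough sketch of how SmoothGrad composes with the saliency above, reusing the hypothetical TinyNMT/word_saliency interface from the previous snippet; σ and n follow the paper's notation, but the exact interaction of the noise with the gradient-times-input term is our assumption:

```python
import torch

def smoothgrad_saliency(model, src_ids, tgt_id, sigma=0.15, n=30):
    base_emb = model.embed(src_ids).detach()
    total = torch.zeros(src_ids.shape[0])
    for _ in range(n):
        # Perturb the queried embedding vectors with N(0, sigma^2) noise,
        # as described in the modified procedure above.
        noisy = base_emb + sigma * torch.randn_like(base_emb)
        noisy = noisy.requires_grad_(True)
        p = model.step_probs(noisy)[tgt_id]
        p.backward()
        total += (noisy.grad * noisy).sum(dim=-1).detach()
    return total / n           # average the n noisy saliency estimates
```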
"We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.", "Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations of our proposed method using the resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus, and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source, and measure AER against this reference.3", "Notice that both automatic evaluation methods have their respective limitations: the force decoding method may force the model to predict something it deems unlikely, thus generating noisy alignments, whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al. (2019) in data setup and use the accompanying scripts of that paper4 for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for the German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq5 to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014), convolutional systems (FConv) (Gehring et al., 2017), and Transformer systems (Transformer) (Vaswani et al., 2017).", "We use the pre-configured model architectures for the IWSLT German-English experiments6 to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For Transformer, we follow the implementation in fairseq and use the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple versions of the attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to show that smoothing by itself does not improve the interpretation quality and has to be used together with an effective interpretation method; • (Li et al., 2016): applied with normal back-propagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure as Zenkel et al. (2019) to convert soft alignment scores to hard alignments.", "For the force decoding experiments, we generate symmetrized alignment results with grow-diag-final.", "We also include the AER results7 of fast-align (Dyer et al., 2013), GIZA++8 and the best model (Add+SGD) from Zenkel et al. (2019) on the same dataset for comparison.", "However, readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and makes no architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update the model with the full sentence to generate optimal alignments, while our system and Zenkel et al. (2019) can do so on-the-fly.",
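For concreteness, a hedged sketch of the evaluation plumbing follows: a per-target-word argmax turns soft scores into hard alignments, and AER is computed over sure (S) and possible (P) gold links in the standard way. The extraction procedure of Zenkel et al. (2019) may differ in its details; the function names here are ours:

```python
def hard_alignment(scores):
    # scores[j][i]: saliency of source word i for target word j.
    # One link per target word, at the argmax source position.
    return {(max(range(len(row)), key=row.__getitem__), j)
            for j, row in enumerate(scores)}

def aer(hyp, sure, possible):
    # Standard Alignment Error Rate, assuming sure is a subset of possible:
    # AER = 1 - (|A∩S| + |A∩P|) / (|A| + |S|)
    return 1.0 - (len(hyp & sure) + len(hyp & possible)) / (len(hyp) + len(sure))

# Toy usage: two target words over three source words.
scores = [[0.1, 0.7, 0.2],
          [0.6, 0.1, 0.3]]
hyp = hard_alignment(scores)                  # {(1, 0), (0, 1)}
print(aer(hyp, sure={(1, 0)}, possible={(1, 0), (0, 1)}))
```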
"7 We reproduced the fast-align results as a sanity check, and we were able to perfectly replicate their numbers with their released scripts.", "8 https://github.com/moses-smt/giza-pp", "Realizing the second caveat, we also run fast-align under the online alignment scenario, where we first train a fast-align model and then decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014), where we cannot practically update alignment models on-the-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities, such as Zenkel et al. (2019) and this work.", "The data used in Zenkel et al. (2019) did not provide a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30-sentence subset of the German-English test data with the Transformer model.", "We ended up using the σ = 0.15 recommended in the original paper and a slightly smaller sample size of n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For comparison with previous work, we do not exclude these sentences from the reported results; we instead mark the affected numbers to raise caution.", "Force Decoding Results Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is only reduced for the FConv model, and instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER increases by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by input noise, as the same smoothing procedure applied to attention increases the AER most of the time.", "To summarize, at least under force decoding settings, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "As for Li et al. (2016), for the FConv and LSTM architectures, it is not only consistently worse than our method, but at times also worse than attention.", "Besides, the effect of SmoothGrad is also not as consistent on their saliency formulation as on ours.", "Although, with the Transformer model, the Li et al. (2016) method obtained better AER than our method under several settings, it is still pretty clear overall that the superior mathematical soundness of our method translates into better interpretation quality.", "While the GIZA++ model obtains the best alignment results in Table 1,9 most of our word alignment interpretations of the FConv model with SmoothGrad surpass the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on par with Zenkel et al. (2019).", "These are notable results, as our method is an interpretation method and no extra parameters are updated to optimize the quality of the alignment.", "On the other hand, this also indicates that it is possible to induce high-quality alignments from an NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.",
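Since the symmetrized numbers above are produced with grow-diag-final, a simplified sketch of that heuristic may help; this is our compressed reading of the standard algorithm (Koehn et al., 2003), not the exact script used in these experiments:

```python
NEIGHBORS = [(-1, 0), (0, -1), (1, 0), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]

def grow_diag_final(fwd, rev):
    # fwd, rev: sets of (src, tgt) links from the two alignment directions.
    alignment, union = fwd & rev, fwd | rev

    def unaligned(point):
        # True if the point's source or target word has no link yet.
        i, j = point
        return (all(i2 != i for i2, _ in alignment) or
                all(j2 != j for _, j2 in alignment))

    added = True
    while added:                      # grow-diag: extend into neighbors
        added = False
        for i, j in sorted(alignment):
            for di, dj in NEIGHBORS:
                p = (i + di, j + dj)
                if p in union and p not in alignment and unaligned(p):
                    alignment.add(p)
                    added = True
    for p in sorted(union):           # final: rescue leftover union links
        if p not in alignment and unaligned(p):
            alignment.add(p)
    return alignment
```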
"9 While Ghader and Monz (2017) showed that the AER obtained by the LSTM model is close to that of GIZA++, our experiments yield a much larger difference.", "We think this is largely due to the fact that we chose to train our models with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "Most interestingly, although NMT is often deemed to perform poorly in low-resource settings, our interpretation seems to work relatively well on the ro<>en language pair, which happens to be the language pair we have the least training data for.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To the best of our knowledge, this is also a novel insight, as no one has compared the attention weights of FConv with those of other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap in attention noise levels between different model architectures, it does manage to narrow the difference in some cases.", "Free Decoding Results Table 2 shows the results under the free decoding setting.", "The trend in this group of experiments is similar to Table 1, except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher-quality translations, but it could also be partially attributed to the noise in the fast-align reference.", "Also, notice that the AER numbers are generally lower than in Table 1 under this setting.", "One reason is that our model is aligning output about which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translations in the test set and the NMT output, we find that it is generally easier to align the NMT translation, as it is often a more literal translation.", "6 Analysis 6.1 Comparison with Li et al. (2016) The main reason why the word saliency formulation in Li et al. (2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way the input influences it.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e., have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency for several target words, e.g., should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating a significant negative contribution to the prediction of that target word.", "Hence, a good word alignment interpretation should strongly avoid aligning them.", "SmoothGrad Tables 1 and 2 show that SmoothGrad is a crucial factor in reducing AER, especially for Transformer.", "Figure 3 illustrates this with the Transformer model from Table 1, comparing saliency maps under different SmoothGrad noise levels.", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve 
gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between the source word wir and the target word we: in Subfigure (a), this word pair has very low saliency, but in (c), they become the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very poor representation of the global saliency almost all the time.", "(Table 3: Alignment distribution entropy for selected de-en models; att stands for attention in Table 1.)", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation gives a better estimation of the global saliency.", "It can also be observed from Subfigures (b) and (d) that when the noise is too moderate, the evaluation does not deviate enough from the original spot to gain non-local information, while at (d) it deviates too much and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even to specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well for all of our systems.", "We encourage future work to conduct a more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "Alignment Dispersion We run German-English alignments under several different SmoothGrad noise deviations σ and report their dispersion, as measured by the entropy of the (soft) alignment distribution averaged over the number of target words.", "Results are summarized in Table 3, where lower entropy indicates more peaky alignments.", "First, we observe that the dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3.", "It should also be noted that the alignment dispersion is consistently lower for free decoding than for force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise into the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than that of the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018).",
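The dispersion measure behind Table 3 can in principle be reproduced in a few lines; the per-target-word renormalization of non-negative scores below is our assumption about the exact setup:

```python
import math

def alignment_dispersion(soft_alignments):
    # soft_alignments: one list of non-negative scores over source words
    # per target word; returns the average per-target-word entropy.
    total = 0.0
    for row in soft_alignments:
        z = sum(row)
        probs = [s / z for s in row if s > 0]
        total += -sum(p * math.log(p) for p in probs)
    return total / len(soft_alignments)
```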
"Discussion And Future Work There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among the many others available (Montavon et al., 2018).", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods to the mostly-convolutional architectures in computer vision and to architectures with more non-linearities in NLP.", "We hope that future research from the NLP and machine learning communities can bridge this gap.", "Secondly, the alignment errors in our method come from three different sources: the limitations of NMT models in learning word alignments, the limitations of the interpretation method in recovering interpretable word alignments, and the ambiguity of word alignment itself.", "Although we have shown that high-quality alignments can be recovered from NMT systems (thus pushing our understanding of the limitations of NMT models), we are not yet able to separate these sources of errors in this work.", "While exploration in this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such a purpose, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we only conduct approximate evaluations to measure the ability of our interpretation method.", "An immediate piece of future work would be to evaluate this on human-annotated translation outputs generated by the NMT system.", "Conclusion We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, can be applied either offline or online, and does not require any parameter updates or architectural changes.", "Both the force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality than its attention-based counterpart.", "Our empirical results also probe into the NMT black box and reveal that, even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-23
Li vs Ours
Saliency-driven Word Alignment Interpretation for NMT
Saliency-driven Word Alignment Interpretation for NMT
[]
GEM-SciDuet-train-38#paper-1054#slide-24
1054
Saliency-driven Word Alignment Interpretation for Neural Machine Translation
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223 ], "paper_content_text": [ "Introduction Neural Machine Translation (NMT) has made lots of advancements since its inception.", "One of the key innovations that led to the largest improvements is the introduction of the attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) , which jointly learns word alignment and translation.", "Since then, the attention mechanism has gradually become a general technique in various NLP tasks, including summarization (Rush et al., 2015; See et al., 2017) , natural language inference (Parikh et al., 2016) and speech recognition (Chorowski et al., 2015; Chan et al., 2016) .", "Although word alignment is no longer a integral step like the case for Statistical Machine Translation (SMT) systems (Brown et al., 1993; Koehn et al., 2003) , there is a resurgence of interest in the community to study word alignment for NMT models.", "Even for NMT, word alignments are useful for error analysis, inserting external vocabularies, and providing guidance for human translators in computer-aided translation.", "When aiming for the most accurate alignments, the state-of-the-art tools include GIZA++ (Brown et al., 1993; Och and Ney, 2003) and fast-align (Dyer et al., 2013) , which are all external models invented in SMT era and need to be run as a separate post-processing step after the full sentence translation is complete.", "As a direct result, they are not suitable for analyzing the internal decision processes of the neural machine translation models.", "Besides, these models are hard to apply in the online fashion, i.e.", "in the middle of left-to-right translation process, such as the scenario in certain constrained decoding algorithms (Hasler et al., 2018) and in computeraided translation (Bouma and Parmentier, 2014; Arcan et al., 2014) .", "For these cases, the current common practice is to simply generate word alignments from attention weights between the encoder and decoder.", "However, there are problems with this practice.", "Koehn and Knowles (2017) showed that attention-based word alignment interpretation may be subject to \"off-by-one\" errors.", "Zenkel et al.", "(2019) ; Tang et al.", "(2018b) ; Raganato and Tiedemann (2018) pointed out that the attention-induced alignment is particularly noisy with Transformer models.", "Because of this, some studies, such as Nguyen and Chiang (2018); Zenkel et al.", "(2019) proposed either to add extra modules to generate higher quality word alignments, or to use these modules to further improve the 
model performance or interpretability.", "This paper is a step towards interpreting word alignments from NMT without relying on external models.", "We argue that using only attention weights is insufficient for generating clean word alignment interpretations, which we demonstrate both conceptually and empirically.", "We propose to use the notion of saliency to obtain word alignment interpretation of NMT predictions.", "Different from previous alignment models, our proposal is a pure interpretation method and does not require any parameter update or architecture change.", "Nevertheless, we are able to reduce Alignment Error Rate (AER) by 10-20 points over the attention weight baseline under two evaluation settings we adopt (see Figure 1 for an example), and beat fast-align (Dyer et al., 2013) by as much as 8.7 points.", "Not only have we proposed a superior model interpretation method, but our empirical results also uncover that, contrary to common beliefs, architectures such as convolutional sequenceto-sequence models (Gehring et al., 2017) have already implicitly learned highly interpretable word alignments, which sheds light on how future improvement should be made on these architectures.", "Related Work We start with work that combines word alignments with NMT.", "Research in this area generally falls into one of three themes: (1) employing the notion of word alignments to interpret the prediction of NMT; (2) making use of word alignments to improve NMT performance; (3) making use of NMT to improve word alignments.", "We mainly focus on related work in the first theme as this is the problem we are addressing in this work.", "Then we briefly introduce work in the other themes that is relevant to our study.", "We conclude by briefly summarizing related work to our proposed interpretation method.", "For the attention in RNN-based sequence-tosequence model, the first comprehensive analysis is conducted by Ghader and Monz (2017) .", "They argued that the attention in such systems agree with word alignment to a certain extent by showing that the RNN-based system achieves comparable alignment error rate comparable to that of bidirectional GIZA++ with symmetrization.", "However, they also point out that they are not exactly the same, as training the attention with alignments would occasionally cause the model to forget important information.", "Lee et al.", "(2017) presented a toolkit that facilitates study for the attention in RNN-based models.", "There is also a number of other studies that analyze the attention in Transformer models.", "Tang et al.", "(2018a,b) conducted targeted evaluation of neural machine translation models in two different evaluation tasks, namely subject-verb agreement and word sense disambiguation.", "During the analysis, they noted that the pattern in Transformer model (what they refer to as advanced attention mechanism) is very different from that of the attention in RNN-based architecture, in that a lot of the probability mass is focused on the last input token.", "They did not dive deeper in this phenomenon in their analysis.", "Raganato and Tiedemann (2018) performed a brief but more refined analysis on each attention head and each layer, where they noticed several different patterns inside the modules, and concluded that Transformer tends to focus on local dependencies in lower layers but finds long dependencies on higher ones.", "Beyond interpretation, in order to improve the translation of rare words, Nguyen and Chiang (2018) introduced LexNet, a feed-forward 
neural network that directly predicts the target word from a weighted sum of the source embeddings, on top of an RNN-based Seq2Seq models.", "Their goal was to improve translation output and hence they did not empirically show AER improvements on manually-aligned corpora.", "There are also a few other studies that inject alignment supervision during NMT training (Mi et al., 2016; Liu et al., 2016) .", "In terms of improvements in word alignment quality, Legrand et al.", "(2016) ; Wang et al.", "(2018) ; proposed neu-ral word alignment modules decoupled from NMT systems, while Zenkel et al.", "(2019) introduced a separate module to extract alignment from NMT decoder states, with which they achieved comparable AER with fast-align with Transformer models.", "The saliency method we propose in this work draws its inspiration from visual saliency proposed by Simonyan et al.", "(2013); Springenberg et al.", "(2014) ; Smilkov et al.", "(2017) .", "It should be noted that these methods were mostly applied to computer vision tasks.", "To the best of our knowledge, Li et al.", "(2016) presented the only work that directly employs saliency methods to interpret NLP models.", "Most similar to our work in spirit, Ding et al.", "(2017) used Layer-wise Relevance Propagation (LRP; Bach et al.", "2015) , an interpretation method resembling saliency, to interpret the internal working mechanisms of RNN-based neural machine translation systems.", "Although conceptually LRP is also a good fit for word alignment interpretation, we have some concerns with the mathematical soundness of LRP when applied to attention models.", "Our proposed method is also considerably more flexible and easier to implement than LRP.", "The Interpretation Problem Formally, by interpreting model prediction, we are referring to the following problem: given a trained MT model and input tokens S = {s 0 , s 1 , .", ".", ".", ", s I−1 }, at a certain time step j when the models predicts t j , we want to know which source word in S \"contributed\" most to this prediction.", "Note that the prediction t j might not be arg max t j p(t j | t 1:j−1 ), as the locally optimal option may be pruned during beam search and not end up in the final translation.", "Under this framework, we can see an important conceptual problem regarding interpreting attention weights as word alignment.", "Suppose for the same source sentence, there are two alternative translations that diverge at target time step j, generating t j and t ′ j which respectively correspond to different source words.", "Presumably, the source word that is aligned to t j and t ′ j should changed correspondingly.", "However, this is not possible with the attention weight interpretation, because the attention weight is computed before prediction of t j or t ′ j .", "With that, we argue that an ideal interpretation algorithm should be able to adapt the interpretation with the specified output label, regard-less of whether it is the most likely label predicted by the model.", "As a final note, the term \"attention weights\" here refers to the weights of the attention between encoder and decoder (the \"encoder-decoder attention\" in Vaswani et al.", "(2017) ).", "Specifically, they do not refer to the weight of self-attention modules that only exist in the Transformer architecture, which do not establish alignment between the source and target words.", "Method Our proposal is based on the notion of visual saliency (Simonyan et al., 2013) in computer vision.", "In brief, the saliency of an input 
feature is defined by the partial gradient of the output score with regard to the input.", "We propose to extend this idea to NMT by drawing analogy between input pixels and the embedding look-up operation.", "Visual Saliency Suppose we have an image classification example (x 0 , y 0 ), with y 0 being a specific image class and x 0 being an |X |-dimensional vector.", "Each entry of x 0 is an input feature (i.e., a pixel) to the classifier.", "Given the input x 0 , a trained classifier can generate a prediction score for class y 0 , denoted as p(y 0 | x 0 ).", "Consider the first-order Taylor expansion of a perturbed version of this score at the neighborhood of input x 0 : p(y 0 | x) ≈ p(y 0 | x 0 ) + ∂p(y 0 | x) ∂x x 0 · (x − x 0 ) (1) This is essentially re-formulating the perturbed prediction score p(y 0 | x) as an affine approximation of the input features, while the \"contribution\" of each feature to the final prediction being the partial derivative of the prediction score with regard to the feature.", "Assuming a feature that is deemed as salient for the local perturbation of the prediction score would also be globally salient, the saliency of an input feature is defined as follows: Definition 1 Denoted as Ψ(x, y), the saliency of feature vector x with regard to output class y is defined as ∂p(y | x) ∂x .", "Note that Ψ(x, y) is also a vector, with each entry corresponding to the saliency of a single input feature in x.", "Such formulation has following nice properties: • The saliency of an input feature is related to the choice of output class y, as model scores of different output classes correspond to a different set of parameters, and hence resulting in different partial gradients for the input features.", "This makes up for the aforementioned deficiency of attention weights in addressing the interpretation problem.", "• The partial gradient could be computed by back-propagation, which is efficiently implemented in most deep learning frameworks.", "• The formulation is agnostic to the model that generates p(y | x), so it could be applied to any deep learning architecture.", "Word Saliency In computer vision, the input feature is a 3D Tensor corresponding to the level in each channel.", "The key question to apply such method to NMT is what constitutes the input feature to a NMT system.", "Li et al.", "(2016) proposed to use the embedding of of the input words as the input feature to formulate saliency score, which results in the saliency of an input word being a vector of the same dimension as embedding vectors.", "To obtain a scalar saliency value, they computed the mean of the absolute value of the embedding gradients.", "We argue that there is a more mathematically principled way to approach this.", "To start, we treat the word embedding look-up operation as a dot product between the embedding weight matrix W and an one-hot vector z.", "The size of z is the same as the source vocabulary size.", "Similarly, the input sentence could be formulated as a matrix Z with only 0 and 1 entries.", "Notice that z has certain resemblance to the pixels of an image, with each cell representing the pixel-wise activation level of the words in the vocabulary.", "For the output word t j at time step j, we can similarly define the saliency of the one-hot vector z as: Ψ(z, t j ) = ∂p(t j | Z) ∂z (2) where p(t j | Z) is the probability of word t j generated by the NMT model given source sentence Z. 
Ψ(z, t j ) is a vector of the same size as z.", "However, note that there is a key difference between z and pixels.", "If the pixel level is 0, it means that the pixel is black, while a 0-entry in z means that the input word is not the word denoted by the corresponding cell.", "While the black region of an input image may still carry important information, we are not interested in the saliency of the 0-entries in z.", "1 Hence, we only take the 1-entries of matrix Z as the input to the NMT model.", "For a source word s i in the source sentence, this means we only care about the saliency of the 1-entries, i.e., the entry corresponding to source word s i : ψ(s i , t j ) = [ ∂p(t j | Z) ∂z ] s i = [ ∂p(t j | Z) ∂W s i · ∂W s i ∂z ] s i = [ ∂p(t j | Z) ∂W s i · W ] s i = ∂p(t j | Z) ∂W s i · W s i (3) where [·] i denotes the i-th row of a matrix or the ith element of a vector.", "In other words, the saliency ψ(s i , t j ) is a weighted sum of the word embedding of input word s i , with the partial gradient of each cell as the weight.", "By comparison, the word saliency 2 in Li et al.", "(2016) is defined as: ψ ′ (s i , t j ) = mean ( ∂p(t j | Z) ∂W s i ) (4) There are two implementation details that we would like to call for the reader's attention: • When the same word occurs multiple times in the source sentence, multiple copies of embedding for such word need to be made to ensure that the gradients flowing to different instances of the same word are not merged; • Note that ψ(s i , t j ) is not a probability distribution, which does not affect word alignment results because we are taking arg max.", "For visualizations presented herein, we normalized the distribution by p( s i | t j ) ∝ max(0, ψ(s i , t j )).", "One may also use softmax function for applications that need more well-formed probability distribution.", "1 Although we introduce z to facilitate presentation, note that word embedding look-up is never implemented as a matrix multiplication.", "Instead, it is implemented as a table lookup, so for each input word, only one row of the word embedding is fed into the subsequent computation.", "As a consequence, during training, since the other rows are not part of the computation graph, only parameters in the rows corresponding to the 1-entries will be updated.", "This is another reason why we choose to discard the saliency of 0-entries.", "2 Li et al.", "(2016) mostly focused on studying saliency on the level of word embedding dimensions.", "This word-level formulation is proposed as part of the analysis in Section 5.2 and Section 6 of that work.", "SmoothGrad There are two scenarios where the naïve gradientbased saliency may make mistakes: • For highly non-linear models, the saliency obtained from local perturbation may not be a good representation of the global saliency.", "• If the model fits the distribution nearly perfectly, some data points or input features may become saturated, i.e.", "having a partial gradient of 0.", "This does not necessarily mean they are not salient with regard to the prediction.", "We alleviate these problems with SmoothGrad, a method proposed by Smilkov et al.", "(2017) .", "The idea is to augment the input to the network into n samples by adding random noise generated by normal distribution N (0, σ 2 ).", "The saliency scores of each augmented sample are then averaged to cancel out the noise in the gradients.", "We made one small modification to this method in our experiments: rather than adding noise to the word inputs that are represented as one-hot 
vectors, we instead add noise to the queried embedding vectors.", "This allows us to introduce more randomness for each word input.", "Experiments Evaluation Method The best evaluation method would compare predicted word alignments against manually labeled word alignments between source sentences and NMT output sentences, but this is too costly for our study.", "Instead, we conduct two automatic evaluations for our proposed method using resources available: • force decoding: take a human-annotated corpus, run NMT models to force-generate the target side of the corpus and measure AER against the human alignment; • free decoding: take the NMT prediction, obtain reasonably clean reference alignments between the prediction and the source and measure AER against this reference.", "3 Notice that both automatic evaluation methods have their respective limitation: the force decoding method may force the model to predict something it deems unlikely, and thus generating noisy alignment; whereas the free decoding method lacks authentic references.", "Setup We follow Zenkel et al.", "(2019) in data setup and use the accompanied scripts of that paper 4 for preprocessing.", "Their training data consists of 1.9M, 1.1M and 0.4M sentence pairs for German-English (de-en), English-French (en-fr) and Romanian-English (ro-en) language pairs, respectively, whereas the manually-aligned test data contains 508, 447 and 248 sentence pairs for each language pair.", "There is no development data provided in their setup, and it is not clear what they used for NMT system training, so we set aside the last 1,000 sentences of the training data for each language as the development set.", "For our NMT systems, we use fairseq 5 to train attention-based RNN systems (LSTM) (Bahdanau et al., 2014) , convolution systems (FConv) (Gehring et al., 2017) , and Transformer systems (Transformer) (Vaswani et al., 2017) .", "We use the pre-configured model architectures for IWSLT German-English experiments 6 to build all NMT systems.", "Our experiments cover the following interpretation methods: • Attention: directly take the attention weights as soft alignment scores.", "For transformer, we follow the implementation in fairseq and used the attention weights from the final layer averaged across all heads; • Smoothed Attention: obtain multiple version of attention weights with the same data augmentation procedure as SmoothGrad and average them.", "This is to prove that smoothing itself does not improve the interpretation quality, and has to be used together with effective interpretation method; • (Li et al., 2016) : applied with normal backpropagation (Grad) and SmoothGrad; • Ours: applied with normal back-propagation (Grad) and SmoothGrad.", "For all the methods above, we follow the same procedure in (Zenkel et al., 2019) to convert soft alignment scores to hard alignment.", "For force decoding experiments, we generate symmetrized alignment results with growdiag-final.", "We also include AER results 7 of fast-align (Dyer et al., 2013) , GIZA++ 8 and the best model (Add+SGD) from Zenkel et al.", "(2019) on the same dataset for comparison.", "However, the readers should be aware that there are certain caveats in this comparison: • All of these models are specifically designed and optimized to generate high-quality alignments, while our method is an interpretation method and is not making any architecture modifications or parameter updates; • fast-align and GIZA++ usually need to update model with full sentence to generate optimal 
alignments, while our system and Zenkel et al. (2019) can do so on-the-fly.", "[Footnote 7] We reproduced the fast-align results as a sanity check and we were able to perfectly replicate their numbers with their released scripts.", "[Footnote 8] https://github.com/moses-smt/giza-pp", "Realizing the second caveat, we also run fast-align under the online alignment scenario, where we first train a fast-align model and decode on the test set.", "This is a real-world scenario in applications such as computer-aided translation (Bouma and Parmentier, 2014; Arcan et al., 2014), where we cannot practically update alignment models on-the-fly.", "On the other hand, we believe this is a slightly better comparison for methods with online alignment capabilities such as Zenkel et al. (2019) and this work.", "The data used in Zenkel et al. (2019) did not provide a manually-aligned development set, so we tune the SmoothGrad hyperparameters (noise standard deviation σ and sample size n) on a 30-sentence subset of the German-English test data with the Transformer model.", "We ended up using the recommended σ = 0.15 from the original paper and a slightly smaller sample size n = 30 for speed.", "This hyperparameter setting is applied to the other SmoothGrad experiments as-is.", "For comparison with previous work, we do not exclude these sentences from the reported results; instead, we mark the affected numbers to raise caution.", "Force Decoding Results", "Table 1 shows the AER results under the force decoding setting.", "First, note that after applying our saliency method with normal back-propagation, AER is only reduced for the FConv model but instead increases for LSTM and Transformer.", "The largest increase is observed for Transformer, where the AER increases by about 20 points on average.", "However, after applying SmoothGrad on top of that, we observe a sharp drop in AER, which ends up 10-20 points lower than the attention weight baseline.", "We can also see that this is not just an effect introduced by input noise, as the same smoothing procedure for attention increases the AER most of the time.", "To summarize, at least under force decoding settings, our saliency method with SmoothGrad obtains word alignment interpretations of much higher quality than the attention weight baseline.", "As for Li et al. (2016), on the FConv and LSTM architectures it is not only consistently worse than our method, but at times also worse than attention.", "Besides, the effect of SmoothGrad is also not as consistent on their saliency formulation as on ours.", "Although with the Transformer model the Li et al. (2016) method obtained better AER than our method under several settings, it is still pretty clear overall that the superior mathematical soundness of our method is translated into better interpretation quality.", "While the GIZA++ model obtains the best alignment result in Table 1 [Footnote 9], most of our word alignment interpretations of the FConv model with SmoothGrad surpass the alignment quality of fast-align (either Online or Offline), sometimes by as much as 8.7 points (symmetrized ro<>en result).", "Our best models are also largely on-par with Zenkel et al. (2019).", "These are notable results, as our method is an interpretation method and no extra parameter is updated to optimize the quality of alignment.", "On the other hand, this also indicates that it is possible to induce high-quality alignments from an NMT model without modifying its parameters, showing that it has acquired such information in an implicit way.", "[Footnote 9] While Ghader and Monz (2017) showed that the AER obtained by the LSTM model is close to that of GIZA++, our experiments yield a much larger difference.", "We think this is largely due to the fact that we choose to train our model with BPE, while Ghader and Monz (2017) explicitly avoided doing so.", "Most interestingly, although NMT is often deemed to perform poorly under low-resource settings, our interpretation seems to work relatively well on the ro<>en language pair, which happens to be the language pair for which we have the least training data.", "We think this is a phenomenon that merits further exploration.", "Besides, it can be seen that for all reported methods, the overall order for the number of alignment errors is FConv < LSTM < Transformer.", "To our best knowledge, this is also a novel insight, as no one has compared the attention weights of FConv with those of other architectures before.", "We can also observe that while our method is not strong enough to fully bridge the gap in attention noise level between different model architectures, it does manage to narrow the difference in some cases.", "Free Decoding Results", "Table 2 shows the results under the free decoding setting.", "The trend in this group of experiments is similar to Table 1, except that Transformer occasionally outperforms LSTM.", "We think this is mainly due to the fact that Transformer generates higher quality translations, but it could also be partially attributed to the noise in the fast-align reference.", "Notice also that the AER numbers are generally lower compared to Table 1 under this setting.", "One reason is that our model is aligning output with which it is most confident, so less noise should be expected in the model behavior.", "On the other hand, by qualitatively comparing the reference translation in the test set and the NMT output, we find that it is generally easier to align the translation, as it is often a more literal translation.", "6 Analysis", "6.1 Comparison with Li et al. (2016)", "The main reason why the word saliency formulation in Li et al. (2016) does not work as well for word alignment is the lack of polarity in the formulation.", "In other words, it only quantifies how much the input influences the output, but does not specify in what way the input influences it.", "This is sufficient for error analysis, but does not suit the purpose of word alignment, as humans will only align a target word to the input words that constitute a translation pair, i.e., have positive influence.", "Figure 2 shows a case where this problem occurs in our German-English experiments.", "Note that in Subfigure (a), the source word nur has high saliency on several target words, e.g. should, but the word nur is actually not translated in the reference.", "On the other hand, as shown in Subfigure (b), our method correctly assigns negative (shown as white) or small positive values at all time steps for this source word.", "Specifically, the saliency value of nur for should is negative with large magnitude, indicating significant negative contributions to the prediction of that target word.", "Hence, a good word alignment interpretation should strongly avoid aligning them.", "SmoothGrad", "Tables 1 and 2 show that SmoothGrad is a crucial factor in reducing AER, especially for Transformer.", "[Figure 3: example saliency visualizations, Subfigures (a)-(d), under the different SmoothGrad noise settings discussed below; cf. Table 1.]", "By comparing Subfigures (a) and (c), we notice that (1) without SmoothGrad, the word saliency obtained from the Transformer model is extremely noisy, and (2) the output of SmoothGrad is not only a smoother version of the naïve gradient output, but also gains new information by performing extra forward and backward evaluations with the noisy input.", "For example, compare the alignment point between source word wir and target word we: in Subfigure (a), this word pair has very low saliency, but in (c), they become the most likely alignment pair for that target word.", "Referring back to our motivation for using SmoothGrad in Section 4.3, we think the observations above verify that the Transformer model is a case where very high non-linearities occur almost everywhere in the parameter space, such that the saliency obtained from local perturbation is a very poor representation of the global saliency almost all the time.", "On the other hand, this is also why the Transformer especially relies on SmoothGrad to work well, as the perturbation will give a better estimation of the global saliency.", "It can also be observed from Subfigures (b) and (d) that when the noise is too moderate, as in (b), the evaluation does not deviate enough from the original spot to gain non-local information, while at (d) it deviates too much and hence the resulting alignment is almost random.", "Intuitively, the noise parameter σ should be sensitive to the model architecture or even specific input feature values, but interestingly we end up finding that a single choice from the computer vision literature works well with all of our systems.", "We encourage future work to conduct a more comprehensive analysis of the effect of SmoothGrad on more complicated architectures beyond convolutional neural nets.", "Alignment Dispersion", "We run German-English alignments under several different SmoothGrad noise deviations σ and report their dispersion as measured by the entropy of the (soft) alignment distribution averaged by the number of target words.", "Results are summarized in Table 3, where lower entropy indicates more peaky alignments.", "[Table 3: Alignment distribution entropy for selected de-en models; att stands for attention in Table 1.]", "First, we observe that the dispersion of word saliency gets higher as we increase σ, which aligns with the observations in Figure 3.", "It should also be noted that the alignment dispersion is consistently lower for free decoding than force decoding.", "This verifies our conjecture that the force decoding setting might introduce more noise in the model behavior, but judging from this result, that gap seems to be minimal.", "Comparing different architectures, the dispersion of attention weights does not correlate well with the dispersion of word saliency.", "We also notice that, while the Transformer attention interpretation consistently results in higher AER, its dispersion is lower than the other architectures, indicating that with attention, a lot of the probability mass might be concentrated in the wrong place more often.", "This corroborates the finding in Raganato and Tiedemann (2018).", "Discussion And Future Work", "There are several extensions to this work that we would like to discuss in this section.", "First, in this paper we only explored two saliency methods among many others available (Montavon et al., 2018).", "In our preliminary study, we also experimented with guided back-propagation (Springenberg et al., 2014), a frequently used saliency method in computer vision, which did not work well for our problem.", "We suspect that there is a gap between applying these methods on mostly convolutional architectures in computer vision and architectures with more non-linearities in NLP.", "We hope that future research from the 
NLP and machine learning communities could bridge this gap.", "Secondly, the alignment errors in our method come from three different sources: the limitation of NMT models on learning word alignments, the limitation of the interpretation method on recovering interpretable word alignments, and the ambiguity in word alignments itself.", "Although we have shown that high quality alignments could be recovered from NMT systems (thus pushing our understanding of the limitation of NMT models), we are not yet able to separate these sources of errors in this work.", "While exploration in this direction will help us better understand both NMT models and the capability of saliency methods in NLP, researchers may want to avoid using word alignment as a benchmark for saliency methods because of its ambiguity.", "For such a purpose, simpler tasks with clear ground truth, such as subject-verb agreement, might be a better choice.", "Finally, as mentioned before, we are only conducting approximate evaluation to measure the ability of our interpretation method.", "An immediate future work would be evaluating this on human-annotated translation outputs generated by the NMT system.", "Conclusion", "We propose to use word saliency and SmoothGrad to interpret word alignments from NMT predictions.", "Our proposal is model-agnostic, is able to be applied either offline or online, and does not require any parameter updates or architectural change.", "Both force decoding and free decoding evaluations show that our method is capable of generating word alignment interpretations of much higher quality compared to its attention-based counterpart.", "Our empirical results also probe into the NMT black-box and reveal that even without any special architecture or training algorithm, some NMT models have already implicitly learned interpretable word alignments of comparable quality to fast-align.", "The model and code for our experiments are available at https://github.com/shuoyangd/meerkat." ] }
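The Eq. (3) saliency above amounts to one backward pass plus a row-wise dot product. Below is a minimal PyTorch sketch; the wrapper `model(src_embeds, tgt_tokens)` returning per-position target log-probabilities is a hypothetical interface, not the actual fairseq API. Because the gradient is taken with respect to position-wise embedding rows rather than the shared embedding table, repeated source words automatically receive separate scores, matching the first implementation detail in Section 4.2.

```python
import torch

def word_saliency(model, src_embeds, tgt_tokens, j):
    """Eq. (3) as one backward pass: psi(s_i, t_j) is the dot product of
    d p(t_j | x) / d(embedding of s_i) with that embedding itself, i.e. a
    gradient-weighted sum over the embedding dimensions. Differentiating
    log p instead of p only rescales all scores for a fixed t_j by the
    positive constant 1 / p(t_j), so the per-target argmax is unchanged."""
    src_embeds = src_embeds.clone().detach().requires_grad_(True)
    log_probs = model(src_embeds, tgt_tokens)   # (tgt_len, vocab), assumed
    log_probs[j, tgt_tokens[j]].backward()      # gradient of the j-th target
    # One score per source position; repeated source words get separate
    # rows here, so their gradients are not merged.
    return (src_embeds.grad * src_embeds).sum(dim=-1).detach()
```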
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "The Interpretation Problem", "Method", "Visual Saliency", "Word Saliency", "SmoothGrad", "Evaluation Method", "Setup", "Force Decoding Results", "SmoothGrad", "Alignment Dispersion", "Discussion And Future Work", "Conclusion" ] }
GEM-SciDuet-train-38#paper-1054#slide-24
Conclusion
Saliency + proper word-level score formulation is a better interpretation method than attention NMT models do learn interpretable alignments. We just need to properly uncover them! Saliency-driven Word Alignment Interpretation for NMT
Saliency + proper word-level score formulation is a better interpretation method than attention NMT models do learn interpretable alignments. We just need to properly uncover them! Saliency-driven Word Alignment Interpretation for NMT
[]
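The alignment dispersion measure from Section 6.3 above (entropy of the soft alignment distribution, averaged by the number of target words) is small enough to state exactly; a sketch assuming natural log, since the paper does not specify the base:

```python
import numpy as np

def alignment_dispersion(soft_align):
    """Entropy of each target word's soft alignment distribution over
    source positions, averaged over target words; lower values mean
    peakier alignments.

    soft_align: (tgt_len, src_len) array whose rows each sum to 1.
    """
    eps = 1e-12  # guard against log(0)
    row_entropy = -(soft_align * np.log(soft_align + eps)).sum(axis=1)
    return row_entropy.mean()
```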
GEM-SciDuet-train-39#paper-1055#slide-0
1055
A Joint Model for Chinese Microblog Sentiment Analysis
Topic-based sentiment analysis for Chinese microblog aims to identify the user attitude on specified topics. In this paper, we propose a joint model by incorporating Support Vector Machines (SVM) and deep neural network to improve the performance of sentiment analysis. Firstly, a SVM Classifier is constructed using N-gram, N-POS and sentiment lexicons features. Meanwhile, a convolutional neural network is applied to learn paragraph representation features as the input of another SVM classifier. The classification results outputted by these two classifiers are merged as the final classification results. The evaluations on the SIGHAN-8 Topic-based Chinese microblog sentiment analysis task show that our proposed approach achieves the second rank on micro average F1 and the fourth rank on macro average F1 among a total of 13 submitted systems.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction With the development of the Internet, microblog has become a popular user-generated content platform where users share the newest events or their personal feelings with each other.", "Topic-based microblogs are the most common interactive way for users to share their opinions towards a specified topic.", "To identify the opinions of users, sentiment analysis techniques are investigated to classify texts into different categorizations according to their sentiment polarities.", "Most existing sentiment classification techniques are based on machine learning algorithms, such as Support Vector Machine, Naïve Bayes and Maximum Entropy.", "The machine learning based approach uses feature vectors as the input of classification to predict the classification results.", "Thus, feature engineering, a method for extracting effective features from texts, plays an important role.", "Some commonly used features in sentiment classification are unigram, bigram and sentiment words.", "However, these features cannot work well for cross-domain sentiment classification because of the lack of domain knowledge.", "Danushka Bollegala et al.", "(2011) used multiple sources to construct a sentiment sensitive thesaurus to overcome the lack of domain knowledge.", "New sentiment words expansion is another kind of approach to improve the performance of sentiment analysis.", "Strfano Baccianella et al.", "(2010) constructed SentiWord-Net by extending WordNet with sentiment information.", "It is now widely used in sentiment classification for English.", "As for Chinese sentiment analysis, Minlie Huang et al.", "(2014) proposed a new word detection method by mining the frequent sentiment word patterns.", "This method may discover new sentiment words from a large scale of unlabeled texts.", "With the rapid development of pre-trained word embedding and deep neural networks, a new way to represent texts and features is devloped.", "Mikolov et al.", "(2013) showed that word embedding represents words with meaningful syntactic and semantic information effectively.", "Recursive neural network proposed by Socher et al.", "(2011a; 2011b; is shown efficient to construct sentence representations based on the word embedding.", "Convolutional neural networks (CNN), another deep learn model which achieved success in image recognition field, was applied to nature language processing with word embed-dings.", "Yoon Kim (2014) used CNN with pretrained word embedding to achieve state-ofthe-art performances on some sentence classification tasks, including sentiment classification.", "Siwei Lai et al.", "(2015) incorporated global information in a recurrent convolutional neural network.", "It obtained further improvements comparing to other deep learning models.", "In this paper, we propose a joint model which incorporates traditional machine learning based method (SVM) and deep 
learning model.", "Two different classifiers are developed.", "One is a word feature based SVM classifier which uses word unigram, bigram and sentiment words as features.", "The other is a CNN-based SVM classifier which takes paragraph representation features learned by a CNN as input features.", "The classification results of these two classifiers are integrated to generate the final classification results.", "The evaluations on the SIGHAN-8 Topic-based Chinese microblog sentiment analysis task show that our proposed approach achieves the second rank on micro average F1 and the fourth rank on macro average F1 among a total of 13 submitted systems.", "Furthermore, the joint classifier strategy brings further performance improvement over the individual classifiers.", "The rest of this paper is organized as follows.", "Section 2 presents the design and implementation of our proposed joint model.", "Section 3 gives the evaluation results and discussions.", "Finally, Section 4 gives the conclusion and future research directions.", "Our Approach", "The SIGHAN-8 topic-based Chinese polarity classification task aims to classify Chinese microblogs into three topic-related sentiment classes, namely neutral, positive and negative.", "This task may be generally regarded as a three-category classification problem.", "The SVM classifier, which has been shown effective for document classification, is adopted as the core classifier.", "Here, two different feature representation models, namely a word-based vector space model and a CNN-based composition representation, are adopted to generate the classification features for the two classifiers, respectively.", "The classification outputs of the two classifiers are integrated to generate the final output.", "Data preprocessing", "Chinese microblog text is obviously different from formal text.", "Many microblogs contain noise, including nicknames, hashtags, repost or reply symbols, and URLs.", "Therefore, before feature representation and extraction, preprocessing is performed to filter out noise text in the microblogs.", "Meanwhile, advertising text and topic-irrelevant microblogs are identified as neutral text.", "Especially, this task is designed to identify topic-relevant sentiments.", "Therefore, the information coming from the reply, repost and sharing parts should be filtered out to avoid its influence on the sentiment analysis of the microblog author.", "Generally speaking, such filtering is based on rules.", "Table 1 shows example data preprocessing rules with illustrations.", "Table 2 shows the rules for identifying advertisement and topic-irrelevant microblogs.", "The identified microblogs are labeled as neutral for topic-based sentiment classification.", "Word feature based classifier", "The word feature based classifier is designed based on the vector space model.", "Firstly, new sentiment words from unlabeled sentence data are recognized to expand the sentiment lexicon.", "The classification features are extracted from the labeled training data and sentiment lexicon resources.", "In order to alleviate the influence of unbalanced training data, SMOTE, an oversampling algorithm, is applied to the training data before classifier training.", "Finally, an SVM classifier is trained on the balanced data.", "The framework of the word feature based classifier is shown in Figure 1.", "Feature selection", "Unigram, Bigram, Uni-Part-of-Speech and Bi-Part-of-Speech features are selected as the basic features.", "CHI-test based feature selection is applied to 
obtain the top 20000 features.", "To improve the performance of sentiment classification, additional features based on lexicons, including sentiment word lexicons, negation word lexicons, and adverb word lexicons, are incorporated.", "Table 1: Example data preprocessing rules (Rules | Raw Text | Processed Text): (1) Sharing news with personal comments | 好看?吗?//【Galaxy S6:三星证明自己能做出好看的手机】http://t.cn/RwHRsIb(分享自 @今日头条) | 好看?吗? (2) Removing HashTag | #三星 Galaxy S6# 三星 GALAXY S6, ,挺中意 [酷][酷] [位置] 芒砀路 | 三星 GALAXY S6, ,挺中意 [酷][酷] (3) Removing URL | 699 欧元起传三星 Galaxy S6/S6 Edge 售价获证实(分享自 @新浪科技) http://t.cn/RwTo3on | 699 欧元起传三星 Galaxy S6/S6 Edge 售价获证实(分享自 @新浪科技) (4) Removing nickname | 玻璃取代塑料,更美 Galaxy S6 的 5 大妥协 http://t.cn/RwHY6Az 罗永浩我去小米和三星这是要闹哪样, , ,老罗。 。不能忍啊, , , , ,@ 锤子科技营销帐号 @ 罗永浩 | 玻璃取代塑料,更美 Galaxy S6 的 5 大妥协 http://t.cn/RwHY6Az 罗永浩我去小米和三星这是要闹哪样, , ,老罗。 。不能忍啊, , , , , (5) Removing information sources | 【视频:三星 S6 对比苹果 iPhone6 MWC2015 @youtube 科技】http://t.cn/RwHQzJ8(来自于优酷安卓客户端) | 【视频:三星 S6 对比苹果 iPhone6 MWC2015 @youtube 科技】http://t.cn/RwHQzJ8", "Table 2: Microblog text matching rules (Rules | Type): (1) Including many different topic (\"#...#\") tags | Advertisement; (2) Including many words like \"微商\", \"商机\", \"想赚钱\", \"面膜\" | Advertisement; (3) No actual content | Topic-irrelevant.", "By analyzing the expressions of the microblog text in the training data, some special expression features in microblog text are identified.", "For example, continuous punctuations are always used to express a strong feeling, and thus a microblog with continuous punctuations tends to be subjective.", "Another adopted feature for microblog text is the use of emoticons.", "Sentiment lexicon expansion", "In microblogs, abundant new or informal sentiment words are widely used.", "Normally, these new sentiment words are short but meaningful for expressing a strong feeling.", "These new sentiment words play an important role in Chinese microblog sentiment classification.", "Therefore, sentiment word identification is performed to recognize new sentiment words as a supplement to the sentiment lexicon.", "Twenty million microblog texts collected from the Sina Weibo platform are used in new sentiment word detection.", "Considering that new words normally cannot be correctly segmented by the existing segmentor, identifying new words from preliminary segmentation results together with their POS tags is a feasible method.", "Here, potential components for new words are limited to segmentation tokens shorter than three characters.", "Using word frequency, mutual information and context entropy as the evaluation indicators for words, the most probable new word candidates are obtained.", "With the help of a word embedding construction model, each word in the corpus can be represented as a low-dimensional vector together with its context information.", "Hence, the distances between the new words and the existing sentiment words corresponding to different sentiment polarities are estimated.", "The new words are then classified into one of the three polarity classes by a voting mechanism.", "Classification", "Two steps are performed to determine the topic-relevant sentiment for input microblogs.", "The first step is to distinguish topic relevant messages from topic irrelevant messages.", "Sentiment classification is then applied to topic relevant messages in the second step.", "Topic relevant words generated by clustering analysis are employed as distinguishable features to filter out topic irrelevant microblogs, because normally the topic irrelevant microblogs have few intersections with topic relevant words.", "Some advertisement posts consisting of several hot 
topic hash tags are also filtered out by considering the number of hash tag types in the microblog.", "The provided labeled dataset is used to train the SVM classifier with a linear kernel.", "A new challenge is that the provided training set is imbalanced.", "There are 3973 neutral microblogs, while the numbers of positive and negative microblogs are 394 and 538, respectively.", "In order to reduce the influence of the imbalanced training dataset, the SMOTE algorithm (Chawla et al., 2002) is applied to oversample the minority classes.", "The oversampling ratio is set to 10 and 7.4 for the positive and negative classes, respectively.", "In this way, the training dataset becomes balanced.", "The other classifier is the CNN-based SVM classifier.", "The classifier framework is shown in Figure 2.", "Firstly, the continuous bag-of-words (CBOW) model (Mikolov et al., 2013) is used to learn word embeddings from Chinese microblog text.", "A deep convolutional neural network (CNN) model is applied to learn distributed paragraph representation features for the Chinese microblog training and testing data.", "Finally, the distributed paragraph representation features are used in an SVM classifier to learn the probability distribution over sentiment labels.", "CNN-based SVM classifier", "Word embedding construction", "Word embeddings, wherein words are projected from a sparse, 1-of-V encoding (here V is the vocabulary size) onto a lower-dimensional vector space via a hidden layer, are essentially feature extractors that encode semantic features of words in their dimensions.", "Mikolov et al. (2013) introduced the CBOW model to learn vector representations which capture a large number of syntactic and semantic word relationships from unstructured text data.", "The main idea of this model is to find word representations which use the surrounding words in a sentence or a document to predict the current word.", "In this study, we train the CBOW model using 16GB of Chinese microblog text.", "Finally, we obtain 200-dimensional word embeddings for Chinese microblog text.", "CNN-based SVM classifier", "In the CNN-based SVM classifier, the input is a matrix which is composed of the word embeddings of microblogs.", "There are windows with lengths of three, four and five words, respectively.", "A convolution operation involves three filters which are applied to these windows to produce new features.", "After the convolution operation, a max-over-time pooling operation is applied over these features.", "The maximum value is taken as the feature corresponding to this particular filter.", "The idea is to capture the most important feature, which has the largest value.", "Since one feature is extracted from one filter, the model uses multiple filters (with varying window sizes) to obtain multiple features.", "These features constitute the distributed paragraph feature representation.", "In the last step, an SVM classifier is applied to these distributed paragraph representation features to obtain the probability distributions over labels (positive, negative, and neutral).", "Outputs Merging", "A set of merging rules is designed to incorporate the individual classification results of the two classifiers for generating the final result.", "If the two classification outputs are the same, naturally, the final output is the same.", "If the two classification outputs are different, the final result is determined by the merge rules shown in Table 3.", "Simply speaking, if either of the two classifiers outputs the neutral category, the final output is neutral.", "If the two 
classifiers output positive and negative, respectively, the final output is the result of the CNN-based classifier.", "Such an output-merging strategy is based on statistical analysis of the individual classifier performances on the training dataset.", "Experimental results and analysis", "Data set", "In the SIGHAN-8 Chinese sentiment analysis bakeoff dataset, 4905 topic-based Chinese microblogs are provided as training data, consisting of 394 positive, 538 negative and 3973 neutral microblogs corresponding to 5 topics, namely \"央行降息\", \"油价\", \"日本马桶\", \"三星 S6\" and \"雾霾\".", "In the testing data, there are 19,469 microblogs corresponding to 20 topics, such as \"12306 验证码\", \"中国政府也门撤侨\", \"何以笙箫默\", \"刘翔退役\".", "Metrics", "Precision, recall and F1-value are used as the evaluation metrics, as shown below: $\mathrm{Precision} = \frac{\mathrm{System.Correct}}{\mathrm{System.Output}}$ (1), $\mathrm{Recall} = \frac{\mathrm{System.Correct}}{\mathrm{Human.Labeled}}$ (2), $F_1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$ (3), where System.Output refers to the total number of the submitted results, System.Correct refers to the number of correctly classified results in the submitted results, and Human.Labeled refers to the total number of manually labeled results in the Gold Standard.", "The evaluation metrics corresponding to positive, negative and overall are estimated, respectively.", "The corresponding micro-average and macro-average performances are then estimated.", "The micro-average estimates the average performance of the three evaluation metrics over the entire dataset.", "The macro-average estimates the average performances of the evaluation metrics on positive, negative and neutral, respectively.", "Experimental results and analysis", "There are two subtasks in the SIGHAN-8 topic-based Chinese microblog polarity classification task: the restricted resource and unrestricted resource subtasks.", "[Table 6: Performances by different classifiers in the unrestricted resource subtask.]", "Table 4 gives the performances in the restricted resource subtask.", "The first column lists the names of the participants who achieve higher macro average F1 values, while our system is named HLT_HITSZ.", "It is observed that our proposed approach achieves better performance on the negative and positive categories, but obviously lower performance on the neutral category.", "The good performance on the recall of the minority classes shows the effectiveness of our handling of the imbalanced training data.", "The performances achieved in the unrestricted resource subtask are listed in Table 5.", "Our system achieves about 3% performance improvement on each category.", "This shows the contributions of the extra training corpus and the merging rules.", "In order to validate the effectiveness of the merging rules, the performances of Classifier 1 and Classifier 2 are evaluated individually.", "The achieved performances are given in Table 6.", "It is observed that, generally speaking, Classifier 1 achieves a higher classification precision because many features come from manually compiled sentiment-related lexicons.", "However, these features are limited to the training data, so Classifier 1 achieves a lower recall.", "In contrast, Classifier 2 may learn representation features automatically from the training data, which is better for generalization.", "Thus, a good recall is achieved.", "Meanwhile, the achieved performances show that our joint model obtains better performance compared to the two individual classifiers, which indicates the effectiveness of our proposed joint classification 
strategy.", "Conclusion", "In this work, we propose a joint model for sentiment topic analysis on Chinese microblog messages.", "A word feature based SVM classifier and an SVM classifier using CNN-based paragraph representation features are developed, respectively.", "To overcome the limitations of each classifier, their classification outputs are merged to generate the final output, where the merging rules are based on statistical analysis of the performances on the training dataset.", "Experimental results show that our proposed joint method achieves better sentiment classification performance than the individual classifiers, which shows the effectiveness of the joint classifier strategy.", "In the future, we intend to study ways to distinguish subjective messages from objective messages to further improve the sentiment classification performance." ] }
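The output-merging strategy described above (Table 3 of the paper) collapses to three cases; a direct sketch of the stated rules, with label strings assumed:

```python
def merge_outputs(word_svm_label, cnn_svm_label):
    """Merging rules: identical outputs pass through; if either
    classifier says neutral, the final label is neutral; on a
    positive/negative conflict, trust the CNN-based classifier
    (Classifier 2)."""
    if word_svm_label == cnn_svm_label:
        return word_svm_label
    if "neutral" in (word_svm_label, cnn_svm_label):
        return "neutral"
    return cnn_svm_label
```

Trusting the CNN-based classifier on positive/negative conflicts matches the statistical analysis of individual classifier performance on the training set reported by the authors.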
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.2.3", "2.3.1", "2.3.2", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Our Approach", "Data preprocessing", "Word feature based classifier", "Feature selection", "Sentiment lexicon expansion", "Classification", "Word embedding construction", "CNN-based SVM classifier", "Data set", "Metrics", "Experimental results and analysis", "Conclusion" ] }
GEM-SciDuet-train-39#paper-1055#slide-0
Introduction
Task: Topic-Based Chinese Message Polarity Classification Classify the message into positive, negative, or neutral sentiment towards the given topic. For messages conveying both a positive and negative sentiment towards the topic, whichever is the stronger sentiment should be chosen. Real and noise data Imbalance data between classes Short but meaningful message Galaxy S6# GALAXY S6 Framework of our model Data preprocessing: rule-based process Word feature based SVM classifier: unigram + bigram + CNN-based SVM classifier: word embedding + convolutional Integrated strategy: multi-classifier results fusion Training and testing data
Task: Topic-Based Chinese Message Polarity Classification Classify the message into positive, negative, or neutral sentiment towards the given topic. For messages conveying both a positive and negative sentiment towards the topic, whichever is the stronger sentiment should be chosen. Real and noise data Imbalance data between classes Short but meaningful message Galaxy S6# GALAXY S6 Framework of our model Data preprocessing: rule-based process Word feature based SVM classifier: unigram + bigram + CNN-based SVM classifier: word embedding + convolutional Integrated strategy: multi-classifier results fusion Training and testing data
[]
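A PyTorch sketch of the convolutional paragraph-feature extractor from Section 2.3.2 above: windows of three, four and five words over 200-dimensional embeddings, max-over-time pooling, and concatenation into one fixed-size vector per microblog that is then fed to an SVM. The number of filters per window size is not stated in the paper, so `n_filters=100` is an assumption.

```python
import torch
import torch.nn as nn

class ParagraphCNN(nn.Module):
    """Filter windows of 3, 4 and 5 words over word embeddings, ReLU,
    then max-over-time pooling; pooled outputs are concatenated into
    the distributed paragraph feature representation."""
    def __init__(self, emb_dim=200, n_filters=100, windows=(3, 4, 5)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, w) for w in windows])

    def forward(self, x):                 # x: (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)             # Conv1d expects (batch, emb, seq)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)   # (batch, n_filters * len(windows))
```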
GEM-SciDuet-train-39#paper-1055#slide-1
GEM-SciDuet-train-39#paper-1055#slide-1
Data preprocessing
Sharing news with personal comments Galaxy S6# GALAXY S6 Removing information sources S6 iPhone6 MWC2015 @youtube http://t.cn/RwHQzJ8
Sharing news with personal comments Galaxy S6# GALAXY S6 Removing information sources S6 iPhone6 MWC2015 @youtube http://t.cn/RwHQzJ8
[]
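For the word-feature pipeline of Section 2.2.1 above (unigram and bigram features trimmed to the top 20000 by CHI-test), a scikit-learn sketch; POS n-grams and the lexicon features are omitted, and documents are assumed to be whitespace-joined Chinese word segments:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

def build_ngram_features(segmented_docs, labels):
    """Unigram + bigram counts with CHI-squared feature selection.
    The token pattern keeps single-character Chinese words, which the
    default pattern would drop; k must not exceed the vocabulary size."""
    vectorizer = CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+")
    X = vectorizer.fit_transform(segmented_docs)
    selector = SelectKBest(chi2, k=20000)
    return selector.fit_transform(X, labels), vectorizer, selector
```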
GEM-SciDuet-train-39#paper-1055#slide-2
1055
A Joint Model for Chinese Microblog Sentiment Analysis
Topic-based sentiment analysis for Chinese microblog aims to identify the user attitude on specified topics. In this paper, we propose a joint model by incorporating Support Vector Machines (SVM) and deep neural network to improve the performance of sentiment analysis. Firstly, a SVM Classifier is constructed using N-gram, N-POS and sentiment lexicons features. Meanwhile, a convolutional neural network is applied to learn paragraph representation features as the input of another SVM classifier. The classification results outputted by these two classifiers are merged as the final classification results. The evaluations on the SIGHAN-8 Topic-based Chinese microblog sentiment analysis task show that our proposed approach achieves the second rank on micro average F1 and the fourth rank on macro average F1 among a total of 13 submitted systems.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction With the development of the Internet, microblog has become a popular user-generated content platform where users share the newest events or their personal feelings with each other.", "Topic-based microblogs are the most common interactive way for users to share their opinions towards a specified topic.", "To identify the opinions of users, sentiment analysis techniques are investigated to classify texts into different categorizations according to their sentiment polarities.", "Most existing sentiment classification techniques are based on machine learning algorithms, such as Support Vector Machine, Naïve Bayes and Maximum Entropy.", "The machine learning based approach uses feature vectors as the input of classification to predict the classification results.", "Thus, feature engineering, a method for extracting effective features from texts, plays an important role.", "Some commonly used features in sentiment classification are unigram, bigram and sentiment words.", "However, these features cannot work well for cross-domain sentiment classification because of the lack of domain knowledge.", "Danushka Bollegala et al.", "(2011) used multiple sources to construct a sentiment sensitive thesaurus to overcome the lack of domain knowledge.", "New sentiment words expansion is another kind of approach to improve the performance of sentiment analysis.", "Strfano Baccianella et al.", "(2010) constructed SentiWord-Net by extending WordNet with sentiment information.", "It is now widely used in sentiment classification for English.", "As for Chinese sentiment analysis, Minlie Huang et al.", "(2014) proposed a new word detection method by mining the frequent sentiment word patterns.", "This method may discover new sentiment words from a large scale of unlabeled texts.", "With the rapid development of pre-trained word embedding and deep neural networks, a new way to represent texts and features is devloped.", "Mikolov et al.", "(2013) showed that word embedding represents words with meaningful syntactic and semantic information effectively.", "Recursive neural network proposed by Socher et al.", "(2011a; 2011b; is shown efficient to construct sentence representations based on the word embedding.", "Convolutional neural networks (CNN), another deep learn model which achieved success in image recognition field, was applied to nature language processing with word embed-dings.", "Yoon Kim (2014) used CNN with pretrained word embedding to achieve state-ofthe-art performances on some sentence classification tasks, including sentiment classification.", "Siwei Lai et al.", "(2015) incorporated global information in a recurrent convolutional neural network.", "It obtained further improvements comparing to other deep learning models.", "In this paper, we propose a joint model which incorporates traditional machine learning based method (SVM) and deep 
learning model.", "Two different classifiers are developed.", "One is a word feature based SVM classifier which uses word unigram, bigram and sentiment words as features.", "Another one is a CNN-based SVM classifier which takes paragraph representations features learned by CNN as input features.", "The classification results of these two classifiers are integrated to generate the final classification results.", "The evaluations on the SIGHAN-8 Topic-based Chinese microblog sentiment analysis task show that our proposed approach achieves the second rank on micro average F1 and the fourth rank on macro average F1 among a total of 13 submitted systems.", "Furthermore, the joint classifier strategy brings further performance improvement on individual classifiers.", "The rest of this paper is organized as follows.", "Section 2 presents the design and implementation of our proposed joint model.", "Section 3 gives the evaluation results and discussions.", "Finally, Section 4 gives the conclusion and future research directions.", "Our Approach The SIGHAN8 topic-based Chinese polarity classification task aims to is to classify Chinese microblog into three topic-related sentiment classes, namely neutral, positive and negative.", "This task may be generally regarded as a three-category classification problem.", "The SVM classifier which has been shown effective to document classification is adopted as the core classifier.", "Here, two different feature representation models, namely word-based vector space model and CNN-based composition representation, are adopted to generate the classification features for two classifiers, respectively.", "The classification outputs of two clas-sifiers are integrated to generate the final output.", "Data preprocessing Chinese microblog text is obviously different from formal text.", "Many microblogs have noises, including nickname, hashtag, repost or reply symbols, and URL.", "Therefore, before the feature representation and extraction, preprocessing is performed to filter out noise text in the microblogs.", "Meanwhile, the advertising text and topic-irrelevant microblog are identified as neutral text.", "Especially, this task is designed to identify the topic-relevant sentiments.", "Therefore, the information coming from the reply, repost and sharing parts should be filtered out to avoid their influences to the sentiment analysis of the microblog author.", "Generally speaking, such filtering is based on rules.", "The table 1 shows the example data preprocessing rules with illustrations.", "Table 2 shows the rules for identifying the advertisement and topic-irrelevant microblogs.", "The identified microblogs are labeled as neutral for topic-based sentiment classification.", "Word feature based classifier The word feature based classifier is designed based on the vector model.", "Firstly, the new sentiment words from unlabeled sentences data are recognized to expand the sentiment lexicon.", "The classification features are extracted from the labeled training data and sentiment lexicon resources.", "In order to alleviate the influences of unbalanced training data, SMOTE, which is an oversampling algorithm, is applied to training data before classifier training.", "Finally, a SVM classifier is trained on the balanced data.", "The framework of word feature based classifier is shown in Figure 1.", "Feature selection Unigram, Bigram, Uni-Part-of-Speech and Bi-Part-of-Speech features are selected as the basic features.", "CHI-test based feature selection is applied to 
obtain the top 20,000 features.", "To improve the performance of sentiment classification, additional features based on lexicons, including sentiment word lexicons, negation word lexicons, and adverb word lexicons, are incorporated.", "Table 1: Example data preprocessing rules (Rule: Raw Text → Processed Text). Sharing news with personal comments: 好看?吗?//【Galaxy S6:三星证明自己能做出好看的手机】http://t.cn/RwHRsIb(分享自 @ 今日头条) → 好看?吗? Removing HashTag: # 三星 Galaxy S6# 三星 GALAXY S6, ,挺中意 [酷][酷] [位置] 芒砀路 → 三星 GALAXY S6, ,挺中意 [酷][酷] Removing URL: 699 欧元起传三星 Galaxy S6/S6 Edge 售价获证实(分享自 @ 新浪科技) http://t.cn/RwTo3on → 699 欧元起传三星 Galaxy S6/S6 Edge 售价获证实 (分享自 @ 新浪科技) Removing nickname: 玻璃取代塑料,更美 Galaxy S6 的 5 大妥协 http://t.cn/RwHY6Az 罗永浩我去小米和三星这是要闹哪样, , ,老罗。 。不能忍啊, , , , ,@ 锤子科技营销帐号 @ 罗永浩 → http://t.cn/RwHY6Az 罗永浩我去小米和三星这是要闹哪样, , ,老罗。 。不能忍啊, , , , , Removing information sources: 【视频:三星 S6 对比苹果 iPhone6 MWC2015 @youtube 科技 】 http://t.cn/RwHQzJ8(来自于优酷安卓客户端) → 【视频:三星 S6 对比苹果 iPhone6 MWC2015 @youtube 科技 】 http://t.cn/RwHQzJ8 Table 2: Microblog text matching rules (Rule → Type). Including many different topic (\"#...#\") tags → Advertisement.", "Including many words like \"微商\", \"商机\", \"想赚钱\", \"面膜\" → Advertisement.", "No actual content → Topic-irrelevant.", "By analyzing the expressions of the microblog text in the training data, some special expression features in microblog text are identified.", "For example, continuous punctuation marks are often used to express a strong feeling; thus, a microblog with continuous punctuation tends to be subjective.", "Another adopted feature for microblog text is the use of emoticons.", "Sentiment lexicon expansion In microblogs, abundant new or informal sentiment words are widely used.", "Normally, these new sentiment words are short but meaningful for expressing a strong feeling.", "These new sentiment words play an important role in Chinese microblog sentiment classification.", "Therefore, sentiment word identification is performed to recognize new sentiment words as a supplement to the sentiment lexicon.", "Twenty million microblog texts collected from the Sina Weibo platform are used in new sentiment word detection.", "Considering that new words normally cannot be correctly segmented by the existing segmenter, identifying new words from preliminary segmentation results together with their POS tags is a feasible method.", "Here, potential components for new words are limited to segmentation tokens shorter than three characters.", "Using word frequency, mutual information and context entropy as the evaluation indicators for words, the most probable new word candidates are obtained.", "With the help of the word embedding construction model, each word in the corpus can be represented as a low-dimensional vector together with its context information.", "Hence, the distances between the new words and the existing sentiment words corresponding to different sentiment polarities are estimated.", "The new words are then classified into one of the three polarity classes through a voting mechanism.", "Classification Two steps are performed to determine the topic-relevant sentiment for input microblogs.", "The first step is to distinguish topic relevant messages from topic irrelevant messages.", "Sentiment classification is then applied to topic relevant messages in the second step.", "Topic relevant words generated by clustering analysis are employed as distinguishing features to filter out topic irrelevant microblogs, because topic irrelevant microblogs normally have few intersections with topic relevant words.", "Some advertisement posts consisting of several hot
topic hash tags are also filtered out by considering the number of hash tag types in the microblog.", "The provided labeled dataset is used to train the SVM classifier with a linear kernel.", "A new challenge is that the provided training set is imbalanced.", "There are about 3973 neutral microblogs, while the numbers of positive and negative microblogs are 394 and 538, respectively.", "In order to reduce the influence of the imbalanced training dataset, the SMOTE algorithm (Chawla et al., 2002) is applied to oversample the minority classes.", "The oversampling ratio is set to 10 and 7.4 for the positive and negative classes, respectively.", "In this way, the training dataset becomes balanced.", "The other classifier is the CNN-based SVM classifier.", "The classifier framework is shown in Figure 2.", "Firstly, the continuous bag-of-words (CBOW) model (Mikolov et al., 2013) is used to learn word embeddings from Chinese microblog text.", "A deep convolutional neural network (CNN) model is applied to learn distributed paragraph representation features for the Chinese microblog training and testing data.", "Finally, the distributed paragraph representation features are used in the SVM classifier to learn the probability distribution over sentiment labels.", "CNN-based SVM classifier Word embedding construction Word embeddings, wherein words are projected from a sparse, 1-of-V encoding (here V is the vocabulary size) onto a lower dimensional vector space via a hidden layer, are essentially feature extractors that encode semantic features of words in their dimensions.", "Mikolov et al.", "(2013) introduced the CBOW model to learn vector representations which capture a large number of syntactic and semantic word relationships from unstructured text data.", "The main idea of this model is to find word representations which use the surrounding words in a sentence or a document to predict the current word.", "In this study, we train the CBOW model by using 16 GB of Chinese microblog text.", "Finally, we obtain 200-dimensional word embeddings for Chinese microblog text.", "CNN-based SVM classifier In the CNN-based SVM classifier, the input is a matrix which is composed of the word embeddings of microblogs.", "There are windows with lengths of three, four and five words, respectively.", "A convolution operation involves three filters which are applied to these windows to produce new features.", "After the convolution operation, a max-over-time pooling operation is applied over these features.", "The maximum value is taken as the feature corresponding to this particular filter.", "The idea is to capture the most important feature, which has the largest value.", "Since one feature is extracted from one filter, the model uses multiple filters (with varying window sizes) to obtain multiple features.", "These features constitute the distributed paragraph feature representation.", "In the last step, a SVM classifier is applied to these distributed paragraph representation features to obtain the probability distributions over labels (positive, negative, and neutral).", "A set of merging rules is designed to incorporate the individual classification results of the two classifiers for generating the final result.", "If the two classification outputs are the same, naturally, the final output is the same.", "If the two classification outputs are different, the final result is determined from the merge rules shown in Table 3.", "Simply speaking, if either of the two classifiers outputs the neutral category, the final output is neutral.", "If the two
classifiers output positive and negative, respectively, the final output is the result of the CNN-based classifier.", "Such an output merging strategy is based on statistical analysis of the individual classifier performances on the training dataset.", "Table 3: Outputs merging. Experimental results and analysis Data set In the SIGHAN-8 Chinese sentiment analysis bakeoff dataset, 4905 topic-based Chinese microblogs are provided as training data, consisting of 394 positive, 538 negative and 3973 neutral microblogs corresponding to 5 topics, namely \"央行降息\", \"油价\", \"日本马桶\", \"三星 S6\" and \"雾霾\".", "In the testing data, there are 19,469 microblogs corresponding to 20 topics, such as \"12306 验证码\", \"中国政府也门撤侨\", \"何以笙箫默\", \"刘翔退役\".", "Metrics Precision, recall and F1-value are used as the evaluation metrics, as shown below: Precision = System.Correct / System.Output (1), Recall = System.Correct / Human.Labeled (2), F1 = (2 × Precision × Recall) / (Precision + Recall) (3), where System.Output refers to the total number of the submitted results, System.Correct refers to the number of correctly classified results in the submitted results, and Human.Labeled refers to the total number of manually labeled results in the Gold Standard.", "The evaluation metrics corresponding to positive, negative and overall are estimated, respectively.", "The corresponding micro-average and macro-average performances are then estimated.", "The micro-average estimates the average performance of the three evaluation metrics over the entire dataset.", "The macro-average estimates the average performances of the evaluation metrics on positive, negative and neutral, respectively.", "Experimental results and analysis There are two subtasks in the SIGHAN-8 topic-based Chinese microblog polarity classification task: the restricted resource and unrestricted resource subtasks.", "Table 6: Performances by different classifiers in the unrestricted resource subtask.", "Table 4 gives the performances in the restricted resource subtask.", "The first column lists the names of the participants who achieve higher macro average F1 values, while our system is named HLT_HITSZ.", "It is observed that our proposed approach achieves better performance on the negative and positive categories, but obviously lower performance on the neutral category.", "The good performance on the recall of the minority classes shows the effectiveness of our handling of the imbalanced training dataset.", "The achieved performances in the unrestricted resource subtask are listed in Table 5.", "Our system achieves about 3% performance improvement on each category.", "This shows the contributions of the extra training corpus and the merging rules.", "In order to validate the effectiveness of the merging rules, the performances of Classifier 1 and Classifier 2 are evaluated individually.", "The achieved performances are given in Table 6.", "It is observed that, generally speaking, Classifier 1 achieves a higher classification precision because many features come from manually compiled sentiment-related lexicons.", "However, these features are limited to the training data, so Classifier 1 achieves a lower recall.", "In contrast, Classifier 2 learns representation features automatically from the training data, which is better for generalization.", "Thus, a good recall is achieved.", "Meanwhile, the achieved performances show that our joint model obtains better performance compared to the two individual classifiers, which indicates the effectiveness of our proposed joint classification
strategy.", "Conclusion In this work, we propose a joint model for sentiment topic analysis on Chinese microblog messages.", "A word feature based SVM classifier and a SVM classifier using CNN-based paragraph representation features are developed, respectively.", "To overcome the limitation of each classifier, their classification outputs are merged to generate the final output while the merging rules are based on statistical analy-sis on the performances on training dataset.", "Experimental results show that our proposed joint method achieves better sentiment classification performance over individual classifiers which show the effectiveness of the joint classifier strategy.", "In future, we intend to study the way to distinguish the subjective messages from objective messages for further improving the sentiment classification performance." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.2.3", "2.3.1", "2.3.2", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Our Approach", "Data preprocessing", "Word feature based classifier", "Feature selection", "Sentiment lexicon expansion", "Classification", "Word embedding construction", "CNN-based SVM classifier", "Data set", "Metrics", "Experimental results and analysis", "Conclusion" ] }
GEM-SciDuet-train-39#paper-1055#slide-2
Word Feature based Classifier
Sentiment Lexicon expansion: To expand the existing sentiment lexicon, POS tags, word frequency, mutual information and context entropy are used to mine new sentiment words from twenty million microblog texts (the slide shows example Positive Words and Negative Words lists). Word features: unigram, bigram, uni-part-of-speech, bi-part-of-speech, sentiment lexicons. Feature Selection Methods: CHI-test, TF-IDF. Imbalanced Data Problem: use the SMOTE algorithm to undersample the majority class and oversample the minority classes. Classifier: SVM with linear kernel
[]
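The slide above summarizes the word feature based classifier: n-gram and POS features, CHI-test (chi-square) selection, SMOTE rebalancing, and a linear-kernel SVM. A minimal end-to-end sketch with scikit-learn and imbalanced-learn; the toy data, the vectorizer settings, and the small k are assumptions (the paper keeps the top 20,000 features and uses oversampling ratios of 10 and 7.4):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # pipeline variant that allows samplers

# Toy pre-segmented microblogs (hypothetical data; neutral is the majority class).
texts = [
    "三星 S6 挺 中意", "这 手机 好看 喜欢",          # positive
    "油价 又 涨 了 气死", "验证码 太 难 真 烦",       # negative
    "今天 发布 了 新品", "明天 有 雨", "转发 微博",    # neutral
]
labels = ["positive", "positive", "negative", "negative",
          "neutral", "neutral", "neutral"]

clf = Pipeline([
    ("ngrams", CountVectorizer(ngram_range=(1, 2), token_pattern=r"(?u)\S+")),  # unigram + bigram
    ("chi2", SelectKBest(chi2, k=10)),   # paper: top 20,000 features; tiny k for toy data
    ("smote", SMOTE(k_neighbors=1)),     # rebalance minority classes before training
    ("svm", LinearSVC()),                # linear-kernel SVM
])
clf.fit(texts, labels)
print(clf.predict(["三星 S6 好看"]))
```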
GEM-SciDuet-train-39#paper-1055#slide-3
1055
A Joint Model for Chinese Microblog Sentiment Analysis
GEM-SciDuet-train-39#paper-1055#slide-3
CNN based SVM Classifier
Train the CBOW model using 16 GB of Chinese microblog text. Input: a matrix composed of the word embeddings of a microblog. Features: use a CNN to construct the distributed paragraph feature representation. Classifier: SVM with linear kernel
[]
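The slide above describes the CNN-based SVM classifier: 200-dimensional CBOW embeddings (which could be trained, for instance, with gensim's Word2Vec using sg=0 on the 16 GB corpus), convolution windows of 3/4/5 words, max-over-time pooling, and an SVM on the pooled features. A compact PyTorch sketch of the feature extractor; the number of filters per window and all names are assumptions:

```python
import torch
import torch.nn as nn

class CNNFeatureExtractor(nn.Module):
    """Kim-style CNN: parallel convolutions over the embedding matrix,
    ReLU, max-over-time pooling, and concatenation into one paragraph vector."""
    def __init__(self, vocab_size, emb_dim=200, windows=(3, 4, 5), n_filters=100):
        super().__init__()
        # In practice the table would be initialized from the pre-trained
        # 200-dimensional CBOW embeddings rather than at random.
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, kernel_size=w) for w in windows)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)      # (batch, emb_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)              # (batch, 3 * n_filters)

extractor = CNNFeatureExtractor(vocab_size=5000)
with torch.no_grad():
    feats = extractor(torch.randint(0, 5000, (8, 40))).numpy()
# The paper then feeds these pooled features to an SVM, e.g.:
# from sklearn.svm import SVC; SVC(probability=True).fit(feats, batch_labels)
```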
GEM-SciDuet-train-39#paper-1055#slide-4
1055
A Joint Model for Chinese Microblog Sentiment Analysis
GEM-SciDuet-train-39#paper-1055#slide-4
Outputs merging
Two classification outputs are the same => the final output is the same. Two classification outputs are different => the final result is determined by the merge rules. These rules are based on a statistical analysis of the individual classifier performances on the training dataset. (Diagram: Classifier 1 and Classifier 2 feed into the final result.)
Two classification outputs are the same => the final output is the same. Two classification outputs are different => the final result is determined by the merge rules. These rules are based on a statistical analysis of the individual classifier performances on the training dataset. (Diagram: Classifier 1 and Classifier 2 feed into the final result.)
[]
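The merging strategy described in the paper text and on the slide above is simple enough to sketch directly. The following is a minimal, hypothetical Python rendering of those rules (identical outputs are kept; neutral from either classifier wins; a positive/negative conflict is resolved in favor of the CNN-based classifier, i.e. Classifier 2); the function and label names are assumptions, not taken from the authors' code.

```python
# Minimal sketch of the output-merging rules described above.
# Assumptions (not from the authors' code): labels are the strings
# "positive" / "negative" / "neutral", Classifier 1 is the word-feature
# SVM, and Classifier 2 is the CNN-feature SVM whose output wins ties.

def merge_outputs(c1_label: str, c2_label: str) -> str:
    """Merge the two classifiers' labels into one final label."""
    if c1_label == c2_label:
        # Rule 1: identical outputs are kept as-is.
        return c1_label
    if "neutral" in (c1_label, c2_label):
        # Rule 2: if either classifier predicts neutral, output neutral.
        return "neutral"
    # Rule 3: a positive-vs-negative disagreement is resolved in favor
    # of the CNN-based classifier (Classifier 2).
    return c2_label

# Example usage: word-feature SVM says positive, CNN-based SVM says negative.
assert merge_outputs("positive", "negative") == "negative"
assert merge_outputs("neutral", "positive") == "neutral"
```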
GEM-SciDuet-train-39#paper-1055#slide-5
1055
A Joint Model for Chinese Microblog Sentiment Analysis
Topic-based sentiment analysis for Chinese microblog aims to identify the user attitude on specified topics. In this paper, we propose a joint model by incorporating Support Vector Machines (SVM) and deep neural network to improve the performance of sentiment analysis. Firstly, a SVM Classifier is constructed using N-gram, N-POS and sentiment lexicons features. Meanwhile, a convolutional neural network is applied to learn paragraph representation features as the input of another SVM classifier. The classification results outputted by these two classifiers are merged as the final classification results. The evaluations on the SIGHAN-8 Topic-based Chinese microblog sentiment analysis task show that our proposed approach achieves the second rank on micro average F1 and the fourth rank on macro average F1 among a total of 13 submitted systems.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction With the development of the Internet, microblog has become a popular user-generated content platform where users share the newest events or their personal feelings with each other.", "Topic-based microblogs are the most common interactive way for users to share their opinions towards a specified topic.", "To identify the opinions of users, sentiment analysis techniques are investigated to classify texts into different categorizations according to their sentiment polarities.", "Most existing sentiment classification techniques are based on machine learning algorithms, such as Support Vector Machine, Naïve Bayes and Maximum Entropy.", "The machine learning based approach uses feature vectors as the input of classification to predict the classification results.", "Thus, feature engineering, a method for extracting effective features from texts, plays an important role.", "Some commonly used features in sentiment classification are unigram, bigram and sentiment words.", "However, these features cannot work well for cross-domain sentiment classification because of the lack of domain knowledge.", "Danushka Bollegala et al.", "(2011) used multiple sources to construct a sentiment sensitive thesaurus to overcome the lack of domain knowledge.", "New sentiment words expansion is another kind of approach to improve the performance of sentiment analysis.", "Strfano Baccianella et al.", "(2010) constructed SentiWord-Net by extending WordNet with sentiment information.", "It is now widely used in sentiment classification for English.", "As for Chinese sentiment analysis, Minlie Huang et al.", "(2014) proposed a new word detection method by mining the frequent sentiment word patterns.", "This method may discover new sentiment words from a large scale of unlabeled texts.", "With the rapid development of pre-trained word embedding and deep neural networks, a new way to represent texts and features is devloped.", "Mikolov et al.", "(2013) showed that word embedding represents words with meaningful syntactic and semantic information effectively.", "Recursive neural network proposed by Socher et al.", "(2011a; 2011b; is shown efficient to construct sentence representations based on the word embedding.", "Convolutional neural networks (CNN), another deep learn model which achieved success in image recognition field, was applied to nature language processing with word embed-dings.", "Yoon Kim (2014) used CNN with pretrained word embedding to achieve state-ofthe-art performances on some sentence classification tasks, including sentiment classification.", "Siwei Lai et al.", "(2015) incorporated global information in a recurrent convolutional neural network.", "It obtained further improvements comparing to other deep learning models.", "In this paper, we propose a joint model which incorporates traditional machine learning based method (SVM) and deep 
learning model.", "Two different classifiers are developed.", "One is a word feature based SVM classifier which uses word unigram, bigram and sentiment words as features.", "Another one is a CNN-based SVM classifier which takes paragraph representations features learned by CNN as input features.", "The classification results of these two classifiers are integrated to generate the final classification results.", "The evaluations on the SIGHAN-8 Topic-based Chinese microblog sentiment analysis task show that our proposed approach achieves the second rank on micro average F1 and the fourth rank on macro average F1 among a total of 13 submitted systems.", "Furthermore, the joint classifier strategy brings further performance improvement on individual classifiers.", "The rest of this paper is organized as follows.", "Section 2 presents the design and implementation of our proposed joint model.", "Section 3 gives the evaluation results and discussions.", "Finally, Section 4 gives the conclusion and future research directions.", "Our Approach The SIGHAN8 topic-based Chinese polarity classification task aims to is to classify Chinese microblog into three topic-related sentiment classes, namely neutral, positive and negative.", "This task may be generally regarded as a three-category classification problem.", "The SVM classifier which has been shown effective to document classification is adopted as the core classifier.", "Here, two different feature representation models, namely word-based vector space model and CNN-based composition representation, are adopted to generate the classification features for two classifiers, respectively.", "The classification outputs of two clas-sifiers are integrated to generate the final output.", "Data preprocessing Chinese microblog text is obviously different from formal text.", "Many microblogs have noises, including nickname, hashtag, repost or reply symbols, and URL.", "Therefore, before the feature representation and extraction, preprocessing is performed to filter out noise text in the microblogs.", "Meanwhile, the advertising text and topic-irrelevant microblog are identified as neutral text.", "Especially, this task is designed to identify the topic-relevant sentiments.", "Therefore, the information coming from the reply, repost and sharing parts should be filtered out to avoid their influences to the sentiment analysis of the microblog author.", "Generally speaking, such filtering is based on rules.", "The table 1 shows the example data preprocessing rules with illustrations.", "Table 2 shows the rules for identifying the advertisement and topic-irrelevant microblogs.", "The identified microblogs are labeled as neutral for topic-based sentiment classification.", "Word feature based classifier The word feature based classifier is designed based on the vector model.", "Firstly, the new sentiment words from unlabeled sentences data are recognized to expand the sentiment lexicon.", "The classification features are extracted from the labeled training data and sentiment lexicon resources.", "In order to alleviate the influences of unbalanced training data, SMOTE, which is an oversampling algorithm, is applied to training data before classifier training.", "Finally, a SVM classifier is trained on the balanced data.", "The framework of word feature based classifier is shown in Figure 1.", "Feature selection Unigram, Bigram, Uni-Part-of-Speech and Bi-Part-of-Speech features are selected as the basic features.", "CHI-test based feature selection is applied to 
obtain the top 20000 features.", "To improve the performance of sentiment classification, additional features based on lexicons including sentiment word lexicons, negation word lexicons, and adverb word lexicons, are incorporated.", "Rules Raw Text Processed Text Sharing news with 好看?吗?//【Galaxy S6:三星证明自 好看?吗? personal comments 己能做出好看的手机】http: //t.cn/ RwHRsIb(分享自 @ 今日头条) Removing HashTag # 三星 Galaxy S6# 三星 GALAXY S6 三星 GALAXY S6, ,挺中意 [酷][酷] [位置] 芒砀路 挺中意 [酷][酷] Removing URL 699 欧元起传三星 Galaxy S6/S6 Edge 售 699 欧元起传三星 Galaxy 价获证实(分享自 @ 新浪科技) S6/S6 Edge 售价获证实 http://t.cn/RwTo3on (分享自 @ 新浪科技) Removing nickname 玻璃取代塑料,更美 Galaxy S6 的 5 大 http://t.cn/RwHY6Az 妥协 http://t.cn/RwHY6Az 罗永浩我去 罗永浩我去小米和三星这 小米和三星这是要闹哪样, , ,老罗。 。不 是要闹哪样, , ,老罗。 。 能忍啊, , , , ,@ 锤子科技营销帐号 @ 罗 不能忍啊, , , , , 永浩 Removing 【视频:三星 S6 对比苹果 iPhone6 【视频:三星 S6 对比苹果 information sources MWC2015 @youtube 科技 】 iPhone6 MWC2015 http://t.cn/RwHQzJ8(来自于优酷安 @youtube 科技 】 卓客户端) http://t.cn/RwHQzJ8 Rules Type Including many different Advertisement topic (\"#...#\") tag.", "Including many words Advertisement like \"微商\", \"商机\", \"想赚钱\",\"面膜\".", "No actual content Topic-irrelevant Table 2 : Microblog text matching rules.", "By analyzing the expressions of the microblog text in training data, some special expression features in microblog text are identified.", "For example, the continuous punctuations are always used to express a strong feeling and thus, the microblog with continuous punctuations tends to be subjective.", "Another adopted feature for microblog text is the use of emoticons.", "Sentiment lexicon expansion In microblogs, abundant new or informal sentiment words are widely used.", "Normally, these new sentiment words are short but meaningful for expressing a strong feeling.", "These new sentiment words play an important role in Chinese microblog sentiment classification.", "Therefore, sentiment word identification is performed to recognize new sentiment words as the supplement of sentiment lexicon.", "Twenty million microblog text collected from Sina Weibo Platform are used in new sentiment word detection.", "Considering that new words normally cannot be correctly segmented by the existing segmentor, identifying new words from preliminary segmentation results together with their POS tags is a feasible method.", "Here, potential components for new words are limited to the segmentation tokens shorter than three.", "Using word frequency, mutual information and context entropy as the evaluation indicators for words, the most possible new word candidates are obtained.", "With the help of word embedding construction model, each word in the corpus can be represented as a low dimension vector together with its context information.", "Hence, the distances between the new words and the existed sentiment words corresponding to difference sentiment polarity are estimated.", "The new words are then classified into one of the three polarity classes by following voting mechanism.", "Classification Two steps are performed to determine the topic-relevant sentiment for input microblogs.", "The first step is to distinguish topic relevant messages from topic irrelevant messages.", "Sentiment classification is then applied to topic relevant messages in the second step.", "Topic relevant words generated by clustering analysis are employed as distinguishable features to filter out topic irrelevant microblogs because normally the topic irrelevant microblogs have few intersections with topic relevant words.", "Some advertisement posts consisting of several hot 
topic hash tags are also filtered out by considering the number of hash tag types in the microblog.", "The provided labeled dataset is used to train the SVM classifier with linear kernel.", "A new challenge is that the provided training set is imbalanced.", "There are about 3973 neutral microblogs, while the numbers of positive and negative microblogs are 394 and 538, respectively.", "In order to reduce the influences of imbalanced training dataset, the SMOTE algorithm (Chawla et al., 2002) is applied to oversampling the samples on minority class.", "Oversampling ratio is set to 10 and 7.4 for positive class and negative class, respectively.", "In this way, the training dataset becomes balanced.", "Another classifier is CNN-based SVM classifier.", "The classifier framework is shown in Figure 2 .", "Firstly, continuous bog of word (CBOW) model (Mikolov et al., 2013 ) is used to learn word embeddings from Chinese microblog text.", "A deep convolutional neural networks (CNN) model is applied to learn distributed paragraph representation features for Chinese microblog training and testing data.", "Finally, the distributed paragraph representation features are used in SVM classifier to learn the probability distribution over sentiment labels.", "CNN-based SVM classifier Word embedding construction Word embedding, wherein words are projected from a sparse, 1-of-V encoding (here V is the vocabulary size) onto a lower dimensional vector space via a hidden layer, are essentially feature extractors that encode semantic features of words in their dimensions.", "Mikolov et al.", "(2013) introduced CBOW model to learn vector representations which captures a large number of syntactic and semantic word relationships from unstructured text data.", "The main idea of this model is to find word representations which use the surrounding words in a sentence or a document to predict current word.", "In this study, we train the CBOW model by using 16GB Chinese microblog text.", "Finally, we obtain 200-dimension word embeddings for Chinese microblog text.", "CNN-based SVM classifier In the CNN-based SVM classifier, the input is a matrix which is composed of the word embeddings of microblogs.", "There are windows with the lengths of three, four and five words, respectively.", "A convolution operation involves three filters which are applied to these windows to produce new features.", "After convolution operation, a max-over-time pooling operation is applied over these features.", "The maximum value is taken as the feature corresponding to this particular filter.", "The idea is to capture the most important feature which has the largest value.", "Since one feature is extracted from one filter, the model uses multiple filters (with varying window sizes) to obtain multiple features.", "These features constitute the distributed paragraph feature representation.", "In the last step, a SVM classifier is applied on these distributed paragraph representation features to obtain the probability distributions over labels (positive, negative, and neutral).", "A set of merging rules is designed to incorporate the individual classification results of the two classifiers for generating the final result.", "If the two classification outputs are the same, naturally, the final output is the same.", "If the two classification outputs are different, the final result is determined from the merge rules shown in Table 3 .", "Simply speaking, if any of two classifiers output neutral category, the final output is neutral.", "If two 
classifiers outputs positive and negative, respectively, the final output is the result of CNN-based clas-sifier.", "Such a classification outputs merging strategy is based on the statistical analysis on the individual classifier performances on training dataset.", "Outputs Merging Experimental results and analysis Data set In the SIGHAN-8 Chinese sentiment analysis bakeoff dataset, 4905 topic-based Chinese microblog are provided as training data which consists of 394 positive, 538 negative and 3973 neutral microblogs corresponding to 5 topics, namely \"央行降息\", \"油价\", \"日本马桶\", \"三星 S6\"and \"雾霾\".", "In the testing data, there are 19,469 microblogs corresponding to 20 topic, such as \"12306 验证码\", \"中国政 府也门撤侨\", \"何以笙箫默\", \"刘翔退役\".", "Metrics Precision, recall and F1-value are used as the evaluation metrics, as shown below: P recision = SystemCorrect SystemOutput (1) Recall = SystemCorrect HumanLabeled (2) F 1 = 2 × P recision × Recall P recision + Recall (3) Where System.Output refers to the total number of the submitted results, System.Correct refers to the number of correctly classified results in the submitted results, Human.Labeled refers to the total number of manually labeled results in the Gold Standard.", "The evaluation metrics corresponding to positive, negative and overall are estimated, respectively.", "The corresponding microaverage and macro-average performances are then estimated.", "The micro-average estimates the average performance of the three evaluation metrics over the entire dataset.", "The macro-average estimates the average performances of the evaluation metrics on positive, negative and neutral, respectively.", "Experimental results and analysis There are two subtasks in SIGHAN-8 topicbased Chinese microblog polarity classification Table 6 : Performances by different classifiers in unrestricted resource subtask.", "task: restricted resource and unrestricted resource subtasks.", "Table 4 gives the performances in restricted resource subtask.", "The first column lists the name of participants who achieves higher macro average F1 values while out system is named as HLT_HITSZ.", "It is observed that our proposed approach achieves better performance on negative and positive categories, but obviously lower performance on neutral category.", "The good performance on the recall of minority classes showed the effectiveness of our consideration on imbalanced dataset training.", "The achieved performances in the unrestricted resource subtask are listed in Table 5 .", "Our system achieves about 3% of performance improvement on each category, respectively.", "It shows the contributions of extra training corpus and merging rules.", "In order to validate the effectiveness of merging rules, the performances of Classifier 1 and Classifier 2 are evaluated, individually.", "The achieved performances are given in Table 6.", "It is observed that generally speaking, Classifier 1 achieves a higher classification precision because many features are coming from manually compiled sentiment-related lexicons.", "However, these features are limited to training data so that Classifier 1 achieved a lower recall.", "On the contrary, Classifier 2 may learn the representation features automatically from training data which is better for generalization.", "Thus, a good recall is achieved.", "Meanwhile, the achieved performances show that our joint model obtains better performances compared to two individual classifiers which indicate the effectiveness of our proposed joint classification 
strategy.", "Conclusion In this work, we propose a joint model for sentiment topic analysis on Chinese microblog messages.", "A word feature based SVM classifier and a SVM classifier using CNN-based paragraph representation features are developed, respectively.", "To overcome the limitation of each classifier, their classification outputs are merged to generate the final output while the merging rules are based on statistical analy-sis on the performances on training dataset.", "Experimental results show that our proposed joint method achieves better sentiment classification performance over individual classifiers which show the effectiveness of the joint classifier strategy.", "In future, we intend to study the way to distinguish the subjective messages from objective messages for further improving the sentiment classification performance." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.2.3", "2.3.1", "2.3.2", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Our Approach", "Data preprocessing", "Word feature based classifier", "Feature selection", "Sentiment lexicon expansion", "Classification", "Word embedding construction", "CNN-based SVM classifier", "Data set", "Metrics", "Experimental results and analysis", "Conclusion" ] }
GEM-SciDuet-train-39#paper-1055#slide-5
Experiments
Training data: 4,905 microblogs (394 positive, 538 negative and 3,973 neutral; 5 topics). Testing data: 19,469 microblogs, 20 topics. Metrics: Precision = System.Correct / System.Output; Recall = System.Correct / Human.Labeled; F1 = 2 × Precision × Recall / (Precision + Recall). (Tables: Performances in the unrestricted resource subtask, with Precision/Recall/F1 per category for each Team Name; Performances by different classifiers in the unrestricted resource subtask, with Precision/Recall/F1 per category for each Approach.)
Training data: 4,905 microblogs (394 positive, 538 negative and 3,973 neutral; 5 topics). Testing data: 19,469 microblogs, 20 topics. Metrics: Precision = System.Correct / System.Output; Recall = System.Correct / Human.Labeled; F1 = 2 × Precision × Recall / (Precision + Recall). (Tables: Performances in the unrestricted resource subtask, with Precision/Recall/F1 per category for each Team Name; Performances by different classifiers in the unrestricted resource subtask, with Precision/Recall/F1 per category for each Approach.)
[]
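The precision/recall/F1 definitions quoted above (System.Correct over System.Output, System.Correct over Human.Labeled) translate directly into code. Below is a small illustrative sketch, assuming gold and predicted labels are parallel lists of strings; micro-averaging pools counts over the whole dataset, while macro-averaging averages the per-class scores, matching the description in the paper text. Function names and the data layout are assumptions for illustration.

```python
# Sketch of the metrics quoted in the paper text above:
#   Precision = System.Correct / System.Output
#   Recall    = System.Correct / Human.Labeled
#   F1        = 2 * Precision * Recall / (Precision + Recall)

def prf(correct, output, labeled):
    p = correct / output if output else 0.0
    r = correct / labeled if labeled else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f

def evaluate(gold, pred, classes=("positive", "negative", "neutral")):
    per_class = {}
    for c in classes:
        correct = sum(g == p == c for g, p in zip(gold, pred))
        output = sum(p == c for p in pred)    # System.Output for class c
        labeled = sum(g == c for g in gold)   # Human.Labeled for class c
        per_class[c] = prf(correct, output, labeled)
    # Micro-average: pool the counts over the entire dataset.
    micro = prf(sum(g == p for g, p in zip(gold, pred)), len(pred), len(gold))
    # Macro-average: unweighted mean of the per-class P, R and F1.
    macro = tuple(sum(s[i] for s in per_class.values()) / len(classes)
                  for i in range(3))
    return per_class, micro, macro

# Toy usage:
gold = ["positive", "neutral", "negative", "neutral"]
pred = ["positive", "neutral", "neutral", "negative"]
print(evaluate(gold, pred)[1])   # micro-averaged (P, R, F1)
```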
GEM-SciDuet-train-39#paper-1055#slide-6
1055
A Joint Model for Chinese Microblog Sentiment Analysis
Topic-based sentiment analysis for Chinese microblog aims to identify the user attitude on specified topics. In this paper, we propose a joint model by incorporating Support Vector Machines (SVM) and deep neural network to improve the performance of sentiment analysis. Firstly, a SVM Classifier is constructed using N-gram, N-POS and sentiment lexicons features. Meanwhile, a convolutional neural network is applied to learn paragraph representation features as the input of another SVM classifier. The classification results outputted by these two classifiers are merged as the final classification results. The evaluations on the SIGHAN-8 Topic-based Chinese microblog sentiment analysis task show that our proposed approach achieves the second rank on micro average F1 and the fourth rank on macro average F1 among a total of 13 submitted systems.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143 ], "paper_content_text": [ "Introduction With the development of the Internet, microblog has become a popular user-generated content platform where users share the newest events or their personal feelings with each other.", "Topic-based microblogs are the most common interactive way for users to share their opinions towards a specified topic.", "To identify the opinions of users, sentiment analysis techniques are investigated to classify texts into different categorizations according to their sentiment polarities.", "Most existing sentiment classification techniques are based on machine learning algorithms, such as Support Vector Machine, Naïve Bayes and Maximum Entropy.", "The machine learning based approach uses feature vectors as the input of classification to predict the classification results.", "Thus, feature engineering, a method for extracting effective features from texts, plays an important role.", "Some commonly used features in sentiment classification are unigram, bigram and sentiment words.", "However, these features cannot work well for cross-domain sentiment classification because of the lack of domain knowledge.", "Danushka Bollegala et al.", "(2011) used multiple sources to construct a sentiment sensitive thesaurus to overcome the lack of domain knowledge.", "New sentiment words expansion is another kind of approach to improve the performance of sentiment analysis.", "Strfano Baccianella et al.", "(2010) constructed SentiWord-Net by extending WordNet with sentiment information.", "It is now widely used in sentiment classification for English.", "As for Chinese sentiment analysis, Minlie Huang et al.", "(2014) proposed a new word detection method by mining the frequent sentiment word patterns.", "This method may discover new sentiment words from a large scale of unlabeled texts.", "With the rapid development of pre-trained word embedding and deep neural networks, a new way to represent texts and features is devloped.", "Mikolov et al.", "(2013) showed that word embedding represents words with meaningful syntactic and semantic information effectively.", "Recursive neural network proposed by Socher et al.", "(2011a; 2011b; is shown efficient to construct sentence representations based on the word embedding.", "Convolutional neural networks (CNN), another deep learn model which achieved success in image recognition field, was applied to nature language processing with word embed-dings.", "Yoon Kim (2014) used CNN with pretrained word embedding to achieve state-ofthe-art performances on some sentence classification tasks, including sentiment classification.", "Siwei Lai et al.", "(2015) incorporated global information in a recurrent convolutional neural network.", "It obtained further improvements comparing to other deep learning models.", "In this paper, we propose a joint model which incorporates traditional machine learning based method (SVM) and deep 
learning model.", "Two different classifiers are developed.", "One is a word feature based SVM classifier which uses word unigram, bigram and sentiment words as features.", "Another one is a CNN-based SVM classifier which takes paragraph representations features learned by CNN as input features.", "The classification results of these two classifiers are integrated to generate the final classification results.", "The evaluations on the SIGHAN-8 Topic-based Chinese microblog sentiment analysis task show that our proposed approach achieves the second rank on micro average F1 and the fourth rank on macro average F1 among a total of 13 submitted systems.", "Furthermore, the joint classifier strategy brings further performance improvement on individual classifiers.", "The rest of this paper is organized as follows.", "Section 2 presents the design and implementation of our proposed joint model.", "Section 3 gives the evaluation results and discussions.", "Finally, Section 4 gives the conclusion and future research directions.", "Our Approach The SIGHAN8 topic-based Chinese polarity classification task aims to is to classify Chinese microblog into three topic-related sentiment classes, namely neutral, positive and negative.", "This task may be generally regarded as a three-category classification problem.", "The SVM classifier which has been shown effective to document classification is adopted as the core classifier.", "Here, two different feature representation models, namely word-based vector space model and CNN-based composition representation, are adopted to generate the classification features for two classifiers, respectively.", "The classification outputs of two clas-sifiers are integrated to generate the final output.", "Data preprocessing Chinese microblog text is obviously different from formal text.", "Many microblogs have noises, including nickname, hashtag, repost or reply symbols, and URL.", "Therefore, before the feature representation and extraction, preprocessing is performed to filter out noise text in the microblogs.", "Meanwhile, the advertising text and topic-irrelevant microblog are identified as neutral text.", "Especially, this task is designed to identify the topic-relevant sentiments.", "Therefore, the information coming from the reply, repost and sharing parts should be filtered out to avoid their influences to the sentiment analysis of the microblog author.", "Generally speaking, such filtering is based on rules.", "The table 1 shows the example data preprocessing rules with illustrations.", "Table 2 shows the rules for identifying the advertisement and topic-irrelevant microblogs.", "The identified microblogs are labeled as neutral for topic-based sentiment classification.", "Word feature based classifier The word feature based classifier is designed based on the vector model.", "Firstly, the new sentiment words from unlabeled sentences data are recognized to expand the sentiment lexicon.", "The classification features are extracted from the labeled training data and sentiment lexicon resources.", "In order to alleviate the influences of unbalanced training data, SMOTE, which is an oversampling algorithm, is applied to training data before classifier training.", "Finally, a SVM classifier is trained on the balanced data.", "The framework of word feature based classifier is shown in Figure 1.", "Feature selection Unigram, Bigram, Uni-Part-of-Speech and Bi-Part-of-Speech features are selected as the basic features.", "CHI-test based feature selection is applied to 
obtain the top 20000 features.", "To improve the performance of sentiment classification, additional features based on lexicons including sentiment word lexicons, negation word lexicons, and adverb word lexicons, are incorporated.", "Rules Raw Text Processed Text Sharing news with 好看?吗?//【Galaxy S6:三星证明自 好看?吗? personal comments 己能做出好看的手机】http: //t.cn/ RwHRsIb(分享自 @ 今日头条) Removing HashTag # 三星 Galaxy S6# 三星 GALAXY S6 三星 GALAXY S6, ,挺中意 [酷][酷] [位置] 芒砀路 挺中意 [酷][酷] Removing URL 699 欧元起传三星 Galaxy S6/S6 Edge 售 699 欧元起传三星 Galaxy 价获证实(分享自 @ 新浪科技) S6/S6 Edge 售价获证实 http://t.cn/RwTo3on (分享自 @ 新浪科技) Removing nickname 玻璃取代塑料,更美 Galaxy S6 的 5 大 http://t.cn/RwHY6Az 妥协 http://t.cn/RwHY6Az 罗永浩我去 罗永浩我去小米和三星这 小米和三星这是要闹哪样, , ,老罗。 。不 是要闹哪样, , ,老罗。 。 能忍啊, , , , ,@ 锤子科技营销帐号 @ 罗 不能忍啊, , , , , 永浩 Removing 【视频:三星 S6 对比苹果 iPhone6 【视频:三星 S6 对比苹果 information sources MWC2015 @youtube 科技 】 iPhone6 MWC2015 http://t.cn/RwHQzJ8(来自于优酷安 @youtube 科技 】 卓客户端) http://t.cn/RwHQzJ8 Rules Type Including many different Advertisement topic (\"#...#\") tag.", "Including many words Advertisement like \"微商\", \"商机\", \"想赚钱\",\"面膜\".", "No actual content Topic-irrelevant Table 2 : Microblog text matching rules.", "By analyzing the expressions of the microblog text in training data, some special expression features in microblog text are identified.", "For example, the continuous punctuations are always used to express a strong feeling and thus, the microblog with continuous punctuations tends to be subjective.", "Another adopted feature for microblog text is the use of emoticons.", "Sentiment lexicon expansion In microblogs, abundant new or informal sentiment words are widely used.", "Normally, these new sentiment words are short but meaningful for expressing a strong feeling.", "These new sentiment words play an important role in Chinese microblog sentiment classification.", "Therefore, sentiment word identification is performed to recognize new sentiment words as the supplement of sentiment lexicon.", "Twenty million microblog text collected from Sina Weibo Platform are used in new sentiment word detection.", "Considering that new words normally cannot be correctly segmented by the existing segmentor, identifying new words from preliminary segmentation results together with their POS tags is a feasible method.", "Here, potential components for new words are limited to the segmentation tokens shorter than three.", "Using word frequency, mutual information and context entropy as the evaluation indicators for words, the most possible new word candidates are obtained.", "With the help of word embedding construction model, each word in the corpus can be represented as a low dimension vector together with its context information.", "Hence, the distances between the new words and the existed sentiment words corresponding to difference sentiment polarity are estimated.", "The new words are then classified into one of the three polarity classes by following voting mechanism.", "Classification Two steps are performed to determine the topic-relevant sentiment for input microblogs.", "The first step is to distinguish topic relevant messages from topic irrelevant messages.", "Sentiment classification is then applied to topic relevant messages in the second step.", "Topic relevant words generated by clustering analysis are employed as distinguishable features to filter out topic irrelevant microblogs because normally the topic irrelevant microblogs have few intersections with topic relevant words.", "Some advertisement posts consisting of several hot 
topic hash tags are also filtered out by considering the number of hash tag types in the microblog.", "The provided labeled dataset is used to train the SVM classifier with linear kernel.", "A new challenge is that the provided training set is imbalanced.", "There are about 3973 neutral microblogs, while the numbers of positive and negative microblogs are 394 and 538, respectively.", "In order to reduce the influences of imbalanced training dataset, the SMOTE algorithm (Chawla et al., 2002) is applied to oversampling the samples on minority class.", "Oversampling ratio is set to 10 and 7.4 for positive class and negative class, respectively.", "In this way, the training dataset becomes balanced.", "Another classifier is CNN-based SVM classifier.", "The classifier framework is shown in Figure 2 .", "Firstly, continuous bog of word (CBOW) model (Mikolov et al., 2013 ) is used to learn word embeddings from Chinese microblog text.", "A deep convolutional neural networks (CNN) model is applied to learn distributed paragraph representation features for Chinese microblog training and testing data.", "Finally, the distributed paragraph representation features are used in SVM classifier to learn the probability distribution over sentiment labels.", "CNN-based SVM classifier Word embedding construction Word embedding, wherein words are projected from a sparse, 1-of-V encoding (here V is the vocabulary size) onto a lower dimensional vector space via a hidden layer, are essentially feature extractors that encode semantic features of words in their dimensions.", "Mikolov et al.", "(2013) introduced CBOW model to learn vector representations which captures a large number of syntactic and semantic word relationships from unstructured text data.", "The main idea of this model is to find word representations which use the surrounding words in a sentence or a document to predict current word.", "In this study, we train the CBOW model by using 16GB Chinese microblog text.", "Finally, we obtain 200-dimension word embeddings for Chinese microblog text.", "CNN-based SVM classifier In the CNN-based SVM classifier, the input is a matrix which is composed of the word embeddings of microblogs.", "There are windows with the lengths of three, four and five words, respectively.", "A convolution operation involves three filters which are applied to these windows to produce new features.", "After convolution operation, a max-over-time pooling operation is applied over these features.", "The maximum value is taken as the feature corresponding to this particular filter.", "The idea is to capture the most important feature which has the largest value.", "Since one feature is extracted from one filter, the model uses multiple filters (with varying window sizes) to obtain multiple features.", "These features constitute the distributed paragraph feature representation.", "In the last step, a SVM classifier is applied on these distributed paragraph representation features to obtain the probability distributions over labels (positive, negative, and neutral).", "A set of merging rules is designed to incorporate the individual classification results of the two classifiers for generating the final result.", "If the two classification outputs are the same, naturally, the final output is the same.", "If the two classification outputs are different, the final result is determined from the merge rules shown in Table 3 .", "Simply speaking, if any of two classifiers output neutral category, the final output is neutral.", "If two 
classifiers outputs positive and negative, respectively, the final output is the result of CNN-based clas-sifier.", "Such a classification outputs merging strategy is based on the statistical analysis on the individual classifier performances on training dataset.", "Outputs Merging Experimental results and analysis Data set In the SIGHAN-8 Chinese sentiment analysis bakeoff dataset, 4905 topic-based Chinese microblog are provided as training data which consists of 394 positive, 538 negative and 3973 neutral microblogs corresponding to 5 topics, namely \"央行降息\", \"油价\", \"日本马桶\", \"三星 S6\"and \"雾霾\".", "In the testing data, there are 19,469 microblogs corresponding to 20 topic, such as \"12306 验证码\", \"中国政 府也门撤侨\", \"何以笙箫默\", \"刘翔退役\".", "Metrics Precision, recall and F1-value are used as the evaluation metrics, as shown below: P recision = SystemCorrect SystemOutput (1) Recall = SystemCorrect HumanLabeled (2) F 1 = 2 × P recision × Recall P recision + Recall (3) Where System.Output refers to the total number of the submitted results, System.Correct refers to the number of correctly classified results in the submitted results, Human.Labeled refers to the total number of manually labeled results in the Gold Standard.", "The evaluation metrics corresponding to positive, negative and overall are estimated, respectively.", "The corresponding microaverage and macro-average performances are then estimated.", "The micro-average estimates the average performance of the three evaluation metrics over the entire dataset.", "The macro-average estimates the average performances of the evaluation metrics on positive, negative and neutral, respectively.", "Experimental results and analysis There are two subtasks in SIGHAN-8 topicbased Chinese microblog polarity classification Table 6 : Performances by different classifiers in unrestricted resource subtask.", "task: restricted resource and unrestricted resource subtasks.", "Table 4 gives the performances in restricted resource subtask.", "The first column lists the name of participants who achieves higher macro average F1 values while out system is named as HLT_HITSZ.", "It is observed that our proposed approach achieves better performance on negative and positive categories, but obviously lower performance on neutral category.", "The good performance on the recall of minority classes showed the effectiveness of our consideration on imbalanced dataset training.", "The achieved performances in the unrestricted resource subtask are listed in Table 5 .", "Our system achieves about 3% of performance improvement on each category, respectively.", "It shows the contributions of extra training corpus and merging rules.", "In order to validate the effectiveness of merging rules, the performances of Classifier 1 and Classifier 2 are evaluated, individually.", "The achieved performances are given in Table 6.", "It is observed that generally speaking, Classifier 1 achieves a higher classification precision because many features are coming from manually compiled sentiment-related lexicons.", "However, these features are limited to training data so that Classifier 1 achieved a lower recall.", "On the contrary, Classifier 2 may learn the representation features automatically from training data which is better for generalization.", "Thus, a good recall is achieved.", "Meanwhile, the achieved performances show that our joint model obtains better performances compared to two individual classifiers which indicate the effectiveness of our proposed joint classification 
strategy.", "Conclusion In this work, we propose a joint model for sentiment topic analysis on Chinese microblog messages.", "A word feature based SVM classifier and a SVM classifier using CNN-based paragraph representation features are developed, respectively.", "To overcome the limitation of each classifier, their classification outputs are merged to generate the final output while the merging rules are based on statistical analy-sis on the performances on training dataset.", "Experimental results show that our proposed joint method achieves better sentiment classification performance over individual classifiers which show the effectiveness of the joint classifier strategy.", "In future, we intend to study the way to distinguish the subjective messages from objective messages for further improving the sentiment classification performance." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.2.1", "2.2.2", "2.2.3", "2.3.1", "2.3.2", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Our Approach", "Data preprocessing", "Word feature based classifier", "Feature selection", "Sentiment lexicon expansion", "Classification", "Word embedding construction", "CNN-based SVM classifier", "Data set", "Metrics", "Experimental results and analysis", "Conclusion" ] }
GEM-SciDuet-train-39#paper-1055#slide-6
Conclusion
Word feature based SVM classifier. Second rank on micro average F1 value. Fourth rank on macro average F1 value.
Word feature based SVM classifier. Second rank on micro average F1 value. Fourth rank on macro average F1 value.
[]
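The CNN-based paragraph representation features mentioned in the abstract and the conclusion above can be sketched as follows. This is a rough illustration only, mirroring the architecture described in the paper's CNN-based SVM classifier section (convolution windows of three, four and five words over 200-dimension word embeddings, max-over-time pooling, and an SVM on the pooled features); the filter count, the random inputs, and all names are assumptions, not the authors' configuration, and the filters here are untrained.

```python
# Rough sketch of CNN-based paragraph features + SVM, as described in
# the paper: convolutions over windows of 3/4/5 words on 200-d word
# embeddings, max-over-time pooling, pooled features fed to an SVM.
# Filter count (100 per window size) and all inputs are assumptions.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
EMB_DIM, N_FILTERS = 200, 100
# (In the real system the filters are learned; random here for illustration.)
filters = {w: rng.normal(scale=0.1, size=(N_FILTERS, w * EMB_DIM))
           for w in (3, 4, 5)}

def paragraph_features(embeddings: np.ndarray) -> np.ndarray:
    """embeddings: (sentence_length, EMB_DIM) matrix of word vectors."""
    feats = []
    for w, W in filters.items():
        # Convolution: apply each filter to every window of w words.
        windows = np.stack([embeddings[i:i + w].ravel()
                            for i in range(len(embeddings) - w + 1)])
        conv = np.maximum(windows @ W.T, 0.0)   # ReLU activation
        feats.append(conv.max(axis=0))          # max-over-time pooling
    return np.concatenate(feats)                # 300-d paragraph vector

# Toy usage: random "microblogs" of 20 words, three sentiment classes.
X = np.stack([paragraph_features(rng.normal(size=(20, EMB_DIM)))
              for _ in range(30)])
y = rng.integers(0, 3, size=30)                 # 0/1/2 = pos/neg/neutral
clf = SVC(probability=True).fit(X, y)           # SVM on the CNN features
```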
GEM-SciDuet-train-40#paper-1056#slide-0
1056
Overview of the NLP-TEA 2015 Shared Task for Chinese Grammatical Error Diagnosis
This paper introduces the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis. We describe the task, data preparation, performance metrics, and evaluation results. The hope is that such an evaluation campaign may produce more advanced Chinese grammatical error diagnosis techniques. All data sets with gold standards and evaluation tools are publicly available for research purposes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97 ], "paper_content_text": [ "Introduction Human language technologies for English grammatical error correction have attracted more attention in recent years (Ng et al., 2013; .", "In contrast to the plethora of research related to develop NLP tools for learners of English as a foreign language, relatively few studies have focused on detecting and correcting grammatical errors for use by learners of Chinese as a foreign language (CFL).", "A classifier has been designed to detect word-ordering errors in Chinese sentences (Yu and Chen, 2012) .", "A ranking SVMbased model has been further explored to suggest corrections for word-ordering errors (Cheng et al., 2014) .", "Relative positioning and parse template language models have been proposed to detect Chinese grammatical errors written by US learners (Wu et al., 2010) .", "A penalized probabilistic first-order inductive learning algorithm has been presented for Chinese grammatical error diagnosis (Chang et al.", "2012) .", "A set of linguistic rules with syntactic information was manually crafted to detect CFL grammatical errors (Lee et al., 2013) .", "A sentence judgment system has been further developed to integrate both rule-based linguistic analysis and n-gram statistical learning for grammatical error detection .", "The ICCE-2014 workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA) organized a shared task on CFL grammatical error diagnosis .", "Due to the greater challenge in identifying grammatical errors in CFL leaners' written sentences, the NLP-TEA 2015 shared task features a Chinese Grammatical Error Diagnosis (CGED) task, providing an evaluation platform for the development and implementation of NLP tools for computer-assisted Chinese learning.", "The developed system should identify whether a given sentence contains grammatical errors, identify the error types, and indicate the range of occurred errors.", "This paper gives an overview of this shared task.", "The rest of this article is organized as follows.", "Section 2 provides the details of the designed task.", "Section 3 introduces the developed data sets.", "Section 4 proposes evaluation metrics.", "Section 5 presents the results of participant approaches for performance comparison.", "Section 6 summarizes the findings and offers futures research directions.", "Task Description The goal of this shared task is to develop NLP tools for identifying the grammatical errors in sentences written by the CFL learners.", "Four PADS error types are included in the target modification taxonomy, that is, mis-ordering (Permutation), redundancy (Addition), omission (Deletion), and mis-selection (Substitution).", "For the sake of simplicity, the input sentence is selected to contain one defined error types.", "The developed tool is expected to identify the error types and its position at which it occurs in the sentence.", "The input instance is given a unique sentence number sid.", "If the inputs contain no grammatical errors, the tool should return \"sid, correct\".", "If an input sentence contains a grammatical error, the output format should be a 
quadruple of \"sid, start_off, end_off, error_type\", where \"start_off\" and \"end_off\" respectively denote the characters at which the grammatical error starts and ends, where each character or punctuation mark occupies 1 space for counting positions.", "\"Error_type\" represents one defined error type in terms of \"Redundant,\" \"Missing,\" \"Selection,\" and \"Disorder\".", "Examples are shown as follows.", "• Example 1 Input: (sid=B2-0080) 他是我的以前的室友 Output: B2-0080, 4, 4, Redundant • Example 2 Input: (sid=A2-0017) 那電影是機器人的故事 Output: A2-0017, 2, 2, Missing • Example 3 Input: (sid=A2-0017) 那部電影是機器人的故事 Output: A2-0017, correct • Example 4 Input: (sid=B1-1193) 吳先生是修理腳踏車的拿手 Output: B1-1193, 11, 12, Selection • Example 5 Input: (sid=B2-2292) 所 以 我 不 會 讓 失 望 她 Output: B2-2292, 7, 9, Disorder The character \"的\" is a redundant character in Ex.", "1.", "There is a missing character between \"那\" and \"電影\" in Ex.", "2, and a missed character \"部\" is shown in the correct sentence in Ex.", "3.", "In Ex.", "4, \"拿手\" is a wrong word.", "One of correct words may be \"好手\".", "\"失望她\" is a word ordering error in Ex.", "5.", "The correct order should be \"她失 望\".", "Data Preparation The learner corpus used in our task was collected from the essay section of the computer-based Test of Chinese as a Foreign Language (TOCFL), administered in Taiwan.", "Native Chinese speakers were trained to manually annotate grammatical errors and provide corrections corresponding to each error.", "The essays were then split into three sets as follows.", "(1) Training Set: This set included 2,205 selected sentences with annotated grammatical errors and their corresponding corrections.", "Each sentence is represented in SGML format as shown in Fig.", "1 .", "Error types were categorized as redundant (430 instances), missing (620), selection (849), and disorder (306).", "All sentences in this set were collected to use for training the grammatical diagnostic tools.", "<DOC> <SENTENCE id=\"B1-1120\"> 我的中文進步了非常快 </SENTENCE> <MISTAKE start_off=\"7\" end_off=\"7\"> <TYPE> Selection </TYPE> <CORRECTION> 我的中文進步得非常快 </CORRECTION> </MISTAKE> </DOC> Figure 1.", "An sentence denoted in SGML format (2) Dryrun Set: A total of 55 sentences were distributed to participants to allow them familiarize themselves with the final testing process.", "Each participant was allowed to submit several runs generated using different models with different parameter settings of their developed tools.", "In addition, to ensure the submitted results could be correctly evaluated, participants were allowed to fine-tune their developed models in the dryrun phase.", "The purpose of dryrun is to validate the submitted output format only, and no dryrun outcomes were considered in the official evaluation (3) Test Set: This set consists of 1,000 testing sentences.", "Half of these sentences contained no grammatical errors, while the other half included a single defined grammatical error: redundant (132 instances), missing (126), selection (110), and disorder (132).", "The evaluation was conducted as an open test.", "In addition to the data sets provided, registered research teams were allowed to employ any linguistic and computational resources to identify the grammatical errors.", "Table 1 shows the confusion matrix used for performance evaluation.", "In the matrix, TP (True Positive) is the number of sentences with grammatical errors that are correctly identified by the developed tool; FP (False Positive) is the number of sentences in which non-existent 
grammatical errors are identified; TN (True Negative) is the number of sentences without grammatical errors that are correctly identified as such; FN (False Negative) is the number of sentences with grammatical errors for which no errors are identified.", "Performance Metrics The criteria for judging correctness are determined at three levels as follows.", "(1) Detection level: binary classification of a given sentence, that is, the correct/incorrect judgment should be completely identical with the gold standard.", "All error types will be regarded as incorrect.", "(2) Identification level: this level could be considered as a multi-class categorization problem.", "All error types should be clearly identified.", "A correct case should be completely identical with the gold standard of the given error type.", "(3) Position level: in addition to identifying the error types, this level also judges the range over which the grammatical error occurs.", "That is to say, the system results should be perfectly identical with the quadruples of the gold standard.", "The following metrics are measured at all levels with the help of the confusion matrix.", "For example, given 8 testing inputs with gold standards shown as \"B1-1138, 7, 10, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, correct\", \"B1-0990, correct\", \"A2-0789, 2, 3, Selection\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\" and \"A2-0920, correct\", the system may output the result shown as \"B1-1138, 7, 8, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, 5, 6, Missing\", \"B1-0990, correct\", \"A2-0789, 2, 5, Disorder\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\" and \"A2-0920, 4, 5, Selection\".", "The evaluation tool will yield the following performance.", "• False Positive Rate (FPR) = 0.5 (=2/4) Notes: {\"A2-0904, 5, 6, Missing\", \"A2-0920, 4, 5, Selection\"} / {\"A2-0904, correct\", \"B1-0990, correct\", \"B1-0295, correct\", \"A2-0920, correct\"} • Detection-level • Accuracy = 0.75 (=6/8) Notes: {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"B1-0990, correct\", \"A2-0789, Disorder\", \"B1-0295, correct\", \"B2-0591, Redundant\"} / {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"A2-0904, Missing\", \"B1-0990, correct\", \"A2-0789, Disorder\", \"B1-0295, correct\", \"B2-0591, Redundant\", \"A2-0920, Selection\"}.", "• Precision = 0.67 (=4/6) Notes: {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"A2-0789, Disorder\", \"B2-0591, Redundant\"} / {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"A2-0904, Missing\", \"A2-0789, Disorder\", \"B2-0591, Redundant\", \"A2-0920, Selection\"}.", "• Position-level • Accuracy = 0.5 (=4/8) Notes: {\"A2-0087, 12, 13, Missing\", \"B1-0990, correct\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\"} / {\"B1-1138, 7, 8, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, 5, 6, Missing\", \"B1-0990, correct\", \"A2-0789, 2, 5, Disorder\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\", \"A2-0920, 4, 5, Selection\"} • Precision = 0.33 (=2/6) Notes: {\"A2-0087, 12, 13, Missing\", \"B2-0591, 3, 3, Redundant\"} / {\"B1-1138, 7, 8, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, 5, 6, Missing\", \"A2-0789, 2, 5, Disorder\", \"B2-0591, 3, 3, Redundant\", \"A2-0920, 4, 5, Selection\"} • Recall = 0.5 (=2/4) Notes: {\"A2-0087, 12, 13, Missing\", \"B2-0591, 3, 3, Redundant\"} / {\"B1-1138, 7, 10, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0789, 2, 3, Selection\", \"B2-0591, 3, 3, Redundant\"} • F1 = 0.4 (=2*0.33*0.5/(0.33+0.5)) Table 2 summarizes the submission statistics for the participating
teams.", "Of 13 registered teams, 6 teams submitted their testing results.", "In formal testing phase, each participant was allowed to submit at most three runs using different models or parameter settings.", "In total, we had received 18 runs.", "Table 3 shows the task testing results.", "The CYUT team achieved the lowest false positive rate of 0.082.", "Detection-level evaluations are designed to detect whether a sentence contains grammatical errors or not.", "A neutral baseline can be easily achieved by always reporting all testing errors are correct without errors.", "According to the test data distribution, the baseline system can achieve an accuracy level of 0.5.", "All systems achieved results slightly better than the baseline.", "The system result submitted by NCYU achieved the best detection accuracy of 0.607.", "We used the F1 score to reflect the tradeoff between precision and recall.", "In the testing results, NTOU provided the best error detection results, providing a high F1 score of 0.6754.", "For correction-level evaluations, the systems need to identify the error types in the given sentences.", "The system developed by NCYU provided the highest F1 score of 0.3584 for grammatical error identification.", "For position-level evaluations, CYUT achieved the best F1 score of 0.1742.", "Note that it is difficult to perfectly identify the error positions, partly because no word delimiters exist among Chinese words.", "Table 3 .", "Testing results of our Chinese grammatical error diagnosis task.", "Evaluation Results In summary, none of the submitted systems provided superior performance.", "It is a really difficult task to develop an effective computer-assisted learning tool for grammatical error diagnosis, especially for the CFL uses.", "In general, this research problem still has long way to go.", "Conclusions and Future Work This paper provides an overview of the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis, including task design, data preparation, evaluation metrics, and performance evaluation results.", "Regardless of actual performance, all submissions contribute to the common effort to produce an effective Chinese grammatical diagnosis tool, and the individual reports in the shared task proceedings provide useful insight into Chinese language processing.", "We hope the data sets collected for this shared task can facilitate and expedite the future development of NLP tools for computer-assisted Chinese language learning.", "Therefore, all data sets with gold standards and evaluation tool are publicly available for research purposes at http://ir.itc.ntnu.edu.tw/lre/nlptea15cged.htm.", "We plan to build new language resources to improve existing techniques for computer-aided Chinese language learning.", "In addition, new data sets with the contextual information of target sentences obtained from CFL learners will be investigated for the future enrichment of this research topic." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Task Description", "Data Preparation", "Performance Metrics", "Evaluation Results", "Conclusions and Future Work" ] }
GEM-SciDuet-train-40#paper-1056#slide-0
Introduction
The NLP-TEA 2015 shared task features a Chinese Grammatical Error Diagnosis (CGED) task, providing an evaluation platform for the development and implementation of NLP tools for computer-assisted Chinese learning
The NLP-TEA 2015 shared task features a Chinese Grammatical Error Diagnosis (CGED) task, providing an evaluation platform for the development and implementation of NLP tools for computer-assisted Chinese learning
[]
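The three judging levels defined in the paper text above (detection, identification, position) reduce to progressively stricter comparisons against the gold quadruple. The sketch below is an illustrative re-implementation, not the official evaluation tool; it assumes results are tuples (sid, start_off, end_off, error_type) aligned by sid, with the string "correct" standing in for error-free sentences, as in the worked example above. Dataset-level FPR, accuracy, precision, recall, and F1 then follow from counting these per-sentence judgments in the confusion matrix.

```python
# Illustrative sketch of the three evaluation levels described above.
# A result is either the string "correct" or a quadruple
# (sid, start_off, end_off, error_type). Not the official scorer.

def judge(gold, system):
    """Return (detection_ok, identification_ok, position_ok) booleans
    for one sentence; gold and system are assumed aligned by sid."""
    g_err = gold != "correct"
    s_err = system != "correct"
    # Detection level: the binary correct/incorrect decision must match.
    detection = g_err == s_err
    # Identification level: the error type must also match the gold.
    identification = detection and (not g_err or gold[3] == system[3])
    # Position level: the full quadruple must be identical to the gold.
    position = detection and (not g_err or gold == system)
    return detection, identification, position

# Example from the paper text: right type, wrong span.
gold_q = ("A2-0087", 12, 13, "Missing")
sys_q = ("A2-0087", 12, 14, "Missing")
print(judge(gold_q, sys_q))   # (True, True, False)
```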
GEM-SciDuet-train-40#paper-1056#slide-1
GEM-SciDuet-train-40#paper-1056#slide-1
Shared Task Description
The developed tool is expected to identify the error type and the position at which it occurs in the sentence. Four PADS error types are included in the target modification taxonomy. For the sake of simplicity, each input sentence is selected to contain one defined error type.
The developed tool is expected to identify the error type and the position at which it occurs in the sentence. Four PADS error types are included in the target modification taxonomy. For the sake of simplicity, each input sentence is selected to contain one defined error type.
[]
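The expected output the slide refers to is the "sid, start_off, end_off, error_type" quadruple, or "sid, correct", as defined in the paper content. A small parsing sketch; the comma-separated layout follows the paper's examples, and the function name is an illustrative assumption rather than part of the shared task kit:

# Sketch of a parser for CGED result lines such as "B2-0080, 4, 4, Redundant"
# or "A2-0017, correct".
ERROR_TYPES = {"Redundant", "Missing", "Selection", "Disorder"}

def parse_result(line):
    # Returns (sid, "correct") or (sid, (start_off, end_off, error_type)).
    fields = [f.strip() for f in line.split(",")]
    if len(fields) == 2 and fields[1] == "correct":
        return fields[0], "correct"
    if len(fields) == 4 and fields[3] in ERROR_TYPES:
        sid, start, end, etype = fields
        return sid, (int(start), int(end), etype)
    raise ValueError("malformed result line: %r" % line)

For example, parse_result("B2-2292, 7, 9, Disorder") returns ("B2-2292", (7, 9, "Disorder")), the shape consumed by a scorer such as the sketch earlier in this section.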
GEM-SciDuet-train-40#paper-1056#slide-3
GEM-SciDuet-train-40#paper-1056#slide-3
Data Preparation
The essay section of the computer-based Test of Chinese as a Foreign Language (TOCFL). Native Chinese speakers were trained to manually annotate grammatical errors and provide corrections corresponding to each error.
The essay section of the computer-based Test of Chinese as a Foreign Language (TOCFL). Native Chinese speakers were trained to manually annotate grammatical errors and provide corrections corresponding to each error.
[]
GEM-SciDuet-train-40#paper-1056#slide-4
GEM-SciDuet-train-40#paper-1056#slide-4
Training Set
This set included 2,205 selected sentences. Error types were categorized as redundant (430 instances), missing (620), selection (849), and disorder (306). Each sentence is represented in SGML format.
This set included 2,205 selected sentences. Error types were categorized as redundant (430 instances), missing (620), selection (849), and disorder (306). Each sentence is represented in SGML format.
[]
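The SGML layout mentioned in this slide is shown in Figure 1 of the paper content (<DOC>, <SENTENCE>, <MISTAKE>, <TYPE>, <CORRECTION>); that particular snippet also happens to be well-formed XML, so a training record can be read with Python's standard library. A sketch under that well-formedness assumption, with an illustrative function name:

import xml.etree.ElementTree as ET

def parse_training_doc(sgml):
    # Read one <DOC> record in the Figure 1 layout into a plain dict.
    # Assumes the record is well-formed XML; real-world SGML that breaks
    # XML rules would need a more lenient parser.
    doc = ET.fromstring(sgml)
    sentence = doc.find("SENTENCE")
    mistake = doc.find("MISTAKE")
    return {
        "sid": sentence.get("id"),
        "sentence": sentence.text.strip(),
        "start_off": int(mistake.get("start_off")),
        "end_off": int(mistake.get("end_off")),
        "error_type": mistake.findtext("TYPE").strip(),
        "correction": mistake.findtext("CORRECTION").strip(),
    }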
GEM-SciDuet-train-40#paper-1056#slide-5
GEM-SciDuet-train-40#paper-1056#slide-5
Dryrun Set
A total of 55 sentences were given to participants to familiarize themselves with the final testing process. The purpose is output format validation only. No matter which performance can be achieved, it will not be included in our official evaluation.
A total of 55 sentences were given to participants to familiarize themselves with the final testing process. The purpose is output format validation only. No matter which performance can be achieved, it will not be included in our official evaluation.
[]
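The worked metric example above starts from result lines such as "B1-1138, 7, 10, Disorder" and "A2-0904, correct". Below is a minimal Python sketch of how such lines could be parsed; the parse_result helper and its return layout are illustrative assumptions, not part of the shared task's official evaluation tool.

```python
# Sketch (assumption, not the official CGED scorer): parse result lines of
# the form "sid, correct" or "sid, start_off, end_off, error_type".

def parse_result(line):
    """Return (sid, None) for a correct sentence, else (sid, (start, end, type))."""
    fields = [f.strip() for f in line.split(",")]
    sid = fields[0]
    if len(fields) == 2 and fields[1].lower() == "correct":
        return sid, None  # judged as containing no grammatical error
    start_off, end_off = int(fields[1]), int(fields[2])
    error_type = fields[3]  # Redundant / Missing / Selection / Disorder
    return sid, (start_off, end_off, error_type)

print(parse_result("B1-1138, 7, 10, Disorder"))  # ('B1-1138', (7, 10, 'Disorder'))
print(parse_result("A2-0904, correct"))          # ('A2-0904', None)
```

Under this representation, position-level correctness is a plain tuple comparison: a system quadruple counts only if it is identical to the gold quadruple.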
GEM-SciDuet-train-40#paper-1056#slide-6
GEM-SciDuet-train-40#paper-1056#slide-6
Test Set
This set consists of 1,000 testing sentences. Half of these sentences contained no grammatical errors, while the other half included a single defined grammatical error: redundant (132 instances), missing (126), selection (110), and disorder (132).
This set consists of 1,000 testing sentences. Half of these sentences contained no grammatical errors, while the other half included a single defined grammatical error: redundant (132 instances), missing (126), selection (110), and disorder (132).
[]
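Because each test sentence carries exactly one gold judgment, detection-level scoring reduces to the TP/FP/TN/FN counts of the confusion matrix described above. The sketch below tallies them; it assumes the hypothetical parse_result representation from the previous sketch (a quadruple for an erroneous sentence, None for a correct one) and is not the official scorer.

```python
# Sketch (assumption): detection-level confusion-matrix counts, where gold
# and system are dicts mapping sid -> quadruple, or None for "correct".

def detection_confusion(gold, system):
    tp = fp = tn = fn = 0
    for sid, gold_error in gold.items():
        sys_error = system[sid]
        if gold_error is not None and sys_error is not None:
            tp += 1  # erroneous sentence flagged as erroneous
        elif gold_error is None and sys_error is not None:
            fp += 1  # correct sentence wrongly flagged
        elif gold_error is None and sys_error is None:
            tn += 1  # correct sentence correctly passed through
        else:
            fn += 1  # erroneous sentence missed
    return tp, fp, tn, fn
```

On the paper's 8-sentence worked example this yields TP = 4, FP = 2, TN = 2, FN = 0, reproducing the reported detection-level accuracy of 0.75 (= 6/8) and false positive rate of 0.5 (= 2/4).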
GEM-SciDuet-train-40#paper-1056#slide-7
GEM-SciDuet-train-40#paper-1056#slide-7
Performance Metrics
Correctness is determined at three levels. False positive rate (FPR) = FP / (FP+TN). Precision = TP / (TP+FP). Recall = TP / (TP+FN). F1 = 2 * Precision * Recall / (Precision+Recall).
Correctness is determined at three levels. False positive rate (FPR) = FP / (FP+TN). Precision = TP / (TP+FP). Recall = TP / (TP+FN). F1 = 2 * Precision * Recall / (Precision+Recall).
[]
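A short sketch of these formulas follows, with the false positive rate computed over FP + TN, as in the worked example's 2/4 figure. It is a plain transcription of the slide's definitions, not the official evaluation tool.

```python
# Sketch (assumption): the slide's metric formulas over confusion-matrix
# counts. With the worked example's detection-level counts (TP=4, FP=2,
# TN=2, FN=0) this gives FPR = 0.5, accuracy = 0.75, precision = 0.67.

def metrics(tp, fp, tn, fn):
    fpr = fp / (fp + tn)                        # false positive rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return fpr, accuracy, precision, recall, f1

print(metrics(4, 2, 2, 0))  # (0.5, 0.75, 0.666..., 1.0, 0.8)
```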
GEM-SciDuet-train-40#paper-1056#slide-9
GEM-SciDuet-train-40#paper-1056#slide-9
13 Participants and 18 Submitted Runs
13 Participants and 18 Submitted Runs
13 Participants and 18 Submitted Runs
[]
GEM-SciDuet-train-40#paper-1056#slide-11
GEM-SciDuet-train-40#paper-1056#slide-11
Summary
It is a really difficult task to develop the computer-assisted Chinese learning tool, since there are only target sentences without the help of their context. None of the submitted systems provided superior performance. In general, this research problem still has a long way to go.
It is a really difficult task to develop the computer-assisted Chinese learning tool, since there are only target sentences without the help of their context. None of the submitted systems provided superior performance. In general, this research problem still has a long way to go.
[]
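The metric walkthrough embedded in the record above (eight gold/system answers; FPR 0.5, detection accuracy 0.75, position-level precision 0.33, recall 0.5, F1 0.4) can be checked mechanically. The Python sketch below is not the official evaluation tool released with the task; it is a minimal re-implementation under the assumption that a "correct" sentence is encoded as None and an error as a (start_off, end_off, error_type) triple keyed by sid, and it reproduces the quoted numbers.

```python
gold = {
    "B1-1138": (7, 10, "Disorder"),  "A2-0087": (12, 13, "Missing"),
    "A2-0904": None,                 "B1-0990": None,
    "A2-0789": (2, 3, "Selection"),  "B1-0295": None,
    "B2-0591": (3, 3, "Redundant"),  "A2-0920": None,
}
system = {
    "B1-1138": (7, 8, "Disorder"),   "A2-0087": (12, 13, "Missing"),
    "A2-0904": (5, 6, "Missing"),    "B1-0990": None,
    "A2-0789": (2, 5, "Disorder"),   "B1-0295": None,
    "B2-0591": (3, 3, "Redundant"),  "A2-0920": (4, 5, "Selection"),
}

def evaluate(level):
    def view(ans):
        # Project an answer onto the granularity judged at this level.
        if ans is None:
            return None                  # "sid, correct"
        if level == "detection":
            return "error"               # binary: any error type is "incorrect"
        if level == "identification":
            return ans[2]                # the error type must match
        return ans                       # position: the full quadruple must match

    tp = fp = fn = tn = 0
    for sid in gold:
        g, s = view(gold[sid]), view(system[sid])
        if g is None and s is None:
            tn += 1                      # correctly left alone
        elif g is None:
            fp += 1                      # false alarm on a correct sentence
        elif s is None:
            fn += 1                      # missed an erroneous sentence
        elif g == s:
            tp += 1                      # matched at this level
        else:
            fp += 1; fn += 1             # wrong guess counts against both P and R
    acc = (tp + tn) / len(gold)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return acc, p, r, f1

false_alarms = sum(gold[s] is None and system[s] is not None for s in system)
print("FPR = %.2f" % (false_alarms / sum(g is None for g in gold.values())))  # 0.50
for level in ("detection", "identification", "position"):
    acc, p, r, f1 = evaluate(level)
    print(level, "acc=%.2f P=%.2f R=%.2f F1=%.2f" % (acc, p, r, f1))
```

This reproduces the quoted figures (detection accuracy 0.75 and precision 0.67; position-level accuracy 0.5, precision 0.33, recall 0.5, F1 0.4). Note that a wrong guess on an erroneous sentence is penalised as both a false positive and a false negative, which is what pushes position-level precision down to 2/6.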
GEM-SciDuet-train-40#paper-1056#slide-12
1056
Overview of the NLP-TEA 2015 Shared Task for Chinese Grammatical Error Diagnosis
This paper introduces the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis. We describe the task, data preparation, performance metrics, and evaluation results. The hope is that such an evaluation campaign may produce more advanced Chinese grammatical error diagnosis techniques. All data sets with gold standards and evaluation tools are publicly available for research purposes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97 ], "paper_content_text": [ "Introduction Human language technologies for English grammatical error correction have attracted more attention in recent years (Ng et al., 2013; .", "In contrast to the plethora of research related to develop NLP tools for learners of English as a foreign language, relatively few studies have focused on detecting and correcting grammatical errors for use by learners of Chinese as a foreign language (CFL).", "A classifier has been designed to detect word-ordering errors in Chinese sentences (Yu and Chen, 2012) .", "A ranking SVMbased model has been further explored to suggest corrections for word-ordering errors (Cheng et al., 2014) .", "Relative positioning and parse template language models have been proposed to detect Chinese grammatical errors written by US learners (Wu et al., 2010) .", "A penalized probabilistic first-order inductive learning algorithm has been presented for Chinese grammatical error diagnosis (Chang et al.", "2012) .", "A set of linguistic rules with syntactic information was manually crafted to detect CFL grammatical errors (Lee et al., 2013) .", "A sentence judgment system has been further developed to integrate both rule-based linguistic analysis and n-gram statistical learning for grammatical error detection .", "The ICCE-2014 workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA) organized a shared task on CFL grammatical error diagnosis .", "Due to the greater challenge in identifying grammatical errors in CFL leaners' written sentences, the NLP-TEA 2015 shared task features a Chinese Grammatical Error Diagnosis (CGED) task, providing an evaluation platform for the development and implementation of NLP tools for computer-assisted Chinese learning.", "The developed system should identify whether a given sentence contains grammatical errors, identify the error types, and indicate the range of occurred errors.", "This paper gives an overview of this shared task.", "The rest of this article is organized as follows.", "Section 2 provides the details of the designed task.", "Section 3 introduces the developed data sets.", "Section 4 proposes evaluation metrics.", "Section 5 presents the results of participant approaches for performance comparison.", "Section 6 summarizes the findings and offers futures research directions.", "Task Description The goal of this shared task is to develop NLP tools for identifying the grammatical errors in sentences written by the CFL learners.", "Four PADS error types are included in the target modification taxonomy, that is, mis-ordering (Permutation), redundancy (Addition), omission (Deletion), and mis-selection (Substitution).", "For the sake of simplicity, the input sentence is selected to contain one defined error types.", "The developed tool is expected to identify the error types and its position at which it occurs in the sentence.", "The input instance is given a unique sentence number sid.", "If the inputs contain no grammatical errors, the tool should return \"sid, correct\".", "If an input sentence contains a grammatical error, the output format should be a 
quadruple of \"sid, start_off, end_off, error_type\", where \"start_off\" and \"end_off\" respectively denote the characters at which the grammatical error starts and ends, where each character or punctuation mark occupies 1 space for counting positions.", "\"Error_type\" represents one defined error type in terms of \"Redundant,\" \"Missing,\" \"Selection,\" and \"Disorder\".", "Examples are shown as follows.", "• Example 1 Input: (sid=B2-0080) 他是我的以前的室友 Output: B2-0080, 4, 4, Redundant • Example 2 Input: (sid=A2-0017) 那電影是機器人的故事 Output: A2-0017, 2, 2, Missing • Example 3 Input: (sid=A2-0017) 那部電影是機器人的故事 Output: A2-0017, correct • Example 4 Input: (sid=B1-1193) 吳先生是修理腳踏車的拿手 Output: B1-1193, 11, 12, Selection • Example 5 Input: (sid=B2-2292) 所 以 我 不 會 讓 失 望 她 Output: B2-2292, 7, 9, Disorder The character \"的\" is a redundant character in Ex.", "1.", "There is a missing character between \"那\" and \"電影\" in Ex.", "2, and a missed character \"部\" is shown in the correct sentence in Ex.", "3.", "In Ex.", "4, \"拿手\" is a wrong word.", "One of correct words may be \"好手\".", "\"失望她\" is a word ordering error in Ex.", "5.", "The correct order should be \"她失 望\".", "Data Preparation The learner corpus used in our task was collected from the essay section of the computer-based Test of Chinese as a Foreign Language (TOCFL), administered in Taiwan.", "Native Chinese speakers were trained to manually annotate grammatical errors and provide corrections corresponding to each error.", "The essays were then split into three sets as follows.", "(1) Training Set: This set included 2,205 selected sentences with annotated grammatical errors and their corresponding corrections.", "Each sentence is represented in SGML format as shown in Fig.", "1 .", "Error types were categorized as redundant (430 instances), missing (620), selection (849), and disorder (306).", "All sentences in this set were collected to use for training the grammatical diagnostic tools.", "<DOC> <SENTENCE id=\"B1-1120\"> 我的中文進步了非常快 </SENTENCE> <MISTAKE start_off=\"7\" end_off=\"7\"> <TYPE> Selection </TYPE> <CORRECTION> 我的中文進步得非常快 </CORRECTION> </MISTAKE> </DOC> Figure 1.", "An sentence denoted in SGML format (2) Dryrun Set: A total of 55 sentences were distributed to participants to allow them familiarize themselves with the final testing process.", "Each participant was allowed to submit several runs generated using different models with different parameter settings of their developed tools.", "In addition, to ensure the submitted results could be correctly evaluated, participants were allowed to fine-tune their developed models in the dryrun phase.", "The purpose of dryrun is to validate the submitted output format only, and no dryrun outcomes were considered in the official evaluation (3) Test Set: This set consists of 1,000 testing sentences.", "Half of these sentences contained no grammatical errors, while the other half included a single defined grammatical error: redundant (132 instances), missing (126), selection (110), and disorder (132).", "The evaluation was conducted as an open test.", "In addition to the data sets provided, registered research teams were allowed to employ any linguistic and computational resources to identify the grammatical errors.", "Table 1 shows the confusion matrix used for performance evaluation.", "In the matrix, TP (True Positive) is the number of sentences with grammatical errors that are correctly identified by the developed tool; FP (False Positive) is the number of sentences in which non-existent 
grammatical errors are identified; TN (True Negative) is the number of sentences without grammatical errors that are correctly identified as such; FN (False Negative) is the number of sentences with grammatical errors for which no errors are identified.", "Performance Metrics The criteria for judging correctness are determined at three levels as follows.", "(1) Detection level: binary classification of a given sentence, that is, correct or incorrect should be completely identical with the gold standard.", "All error types will be regarded as incorrect.", "(2) Identification level: this level could be considered as a multi-class categorization problem.", "All error types should be clearly identified.", "A correct case should be completely identical with the gold standard of the given error type.", "(3) Position level: in addition to identifying the error types, this level also judges the occurred range of grammatical error.", "That is to say, the system results should be perfectly identical with the quadruples of gold standard.", "The following metrics are measured at all levels with the help of the confusion matrix.", "For example, given 8 testing inputs with gold standards shown as \"B1-1138, 7, 10, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, correct\", \"B1-0990, correct\", \"A2-0789, 2, 3, Selection\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\" and \"A2-0920, correct\", the system may output the result shown as \"B1-1138, 7, 8, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, 5, 6, Missing\", \"B1-0990, correct\", \"A2-0789, 2, 5, Disorder\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\" and \"A2-0920, 4, 5, Selection\".", "The evaluation tool will yield the following performance.", "• False Positive Rate (FPR) = 0.5 (=2/4) Notes: {\"A2-0904, 5, 6, Missing\", \"A2-0920, 4, 5, Selection\"} /{\"A2-0904, correct\", \"B1-0090, correct\", \"B1-0295, correct\", \"A2-0920, correct\"} • Detection-level • Accuracy =0.75 (=6/8) Notes: {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"B1-0990, correct\", \"A2-0789, Disorder\", \"B1-0295, correct\", \"B2-0591, Redundant\"} / {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"A2-0904, Missing\", \"B1-0990, correct\", \"A2-0789, Disorder\", \"B1-0295, correct\", \"B2-0591, Redundant\", \"A2-0920, Selection\".}", "• Precision = 0.67 (=4/6) Notes: {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"A2-0789, Disorder\", \"B2-0591, Redundant\"} / {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"A2-0904, Missing\", \"A2-0789, Disorder\", \"B2-0591, Redundant\", \"A2-0920, Selection\".}", "• Position-level • Accuracy =0.5 (=4/8) Notes: {\"A2-0087, 12, 13, Missing\", \"B1-0990, correct\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\"} / {\"B1-1138, 7, 8, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, 5, 6, Missing\", \"B1-0990, correct\", \"A2-0789, 2, 5, Disorder\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\", \"A2-0920, 4, 5, Selection\"} • Precision = 0.33 (=2/6) Notes: {\"A2-0087, 12, 13, Missing\", \"B2-0591, 3, 3, Redundant\"} / {\"B1-1138, 7, 8, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, 5, 6, Missing\", \"A2-0789, 2, 5, Disorder\", \"B2-0591, 3, 3, Redundant\", \"A2-0920, 4, 5, Selection\"} • Recall = 0.5 (=2/4) Notes: {\"A2-0087, 12, 13, Missing\", \"B2-0591, 3, 3, Redundant\"} / {\"B1-1138, 7, 10, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0789, 2, 3, Selection\", \"B2-0591, 3, 3, Redundant\"} • F1=0.4 (=2*0.33*0.5/(0.33+0.5)) Table 2 summarizes the submission statistics for the participating 
teams.", "Of 13 registered teams, 6 teams submitted their testing results.", "In formal testing phase, each participant was allowed to submit at most three runs using different models or parameter settings.", "In total, we had received 18 runs.", "Table 3 shows the task testing results.", "The CYUT team achieved the lowest false positive rate of 0.082.", "Detection-level evaluations are designed to detect whether a sentence contains grammatical errors or not.", "A neutral baseline can be easily achieved by always reporting all testing errors are correct without errors.", "According to the test data distribution, the baseline system can achieve an accuracy level of 0.5.", "All systems achieved results slightly better than the baseline.", "The system result submitted by NCYU achieved the best detection accuracy of 0.607.", "We used the F1 score to reflect the tradeoff between precision and recall.", "In the testing results, NTOU provided the best error detection results, providing a high F1 score of 0.6754.", "For correction-level evaluations, the systems need to identify the error types in the given sentences.", "The system developed by NCYU provided the highest F1 score of 0.3584 for grammatical error identification.", "For position-level evaluations, CYUT achieved the best F1 score of 0.1742.", "Note that it is difficult to perfectly identify the error positions, partly because no word delimiters exist among Chinese words.", "Table 3 .", "Testing results of our Chinese grammatical error diagnosis task.", "Evaluation Results In summary, none of the submitted systems provided superior performance.", "It is a really difficult task to develop an effective computer-assisted learning tool for grammatical error diagnosis, especially for the CFL uses.", "In general, this research problem still has long way to go.", "Conclusions and Future Work This paper provides an overview of the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis, including task design, data preparation, evaluation metrics, and performance evaluation results.", "Regardless of actual performance, all submissions contribute to the common effort to produce an effective Chinese grammatical diagnosis tool, and the individual reports in the shared task proceedings provide useful insight into Chinese language processing.", "We hope the data sets collected for this shared task can facilitate and expedite the future development of NLP tools for computer-assisted Chinese language learning.", "Therefore, all data sets with gold standards and evaluation tool are publicly available for research purposes at http://ir.itc.ntnu.edu.tw/lre/nlptea15cged.htm.", "We plan to build new language resources to improve existing techniques for computer-aided Chinese language learning.", "In addition, new data sets with the contextual information of target sentences obtained from CFL learners will be investigated for the future enrichment of this research topic." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Task Description", "Data Preparation", "Performance Metrics", "Evaluation Results", "Conclusions and Future Work" ] }
GEM-SciDuet-train-40#paper-1056#slide-12
Conclusions
All submissions contribute to the common effort to produce an effective Chinese grammatical diagnosis tool. The individual reports in the shared task proceedings provide useful insight into Chinese language processing.
All submissions contribute to the common effort to produce an effective Chinese grammatical diagnosis tool. The individual reports in the shared task proceedings provide useful insight into Chinese language processing.
[]
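Figure 1 in the record above gives one training instance in SGML. Below is a minimal sketch of reading it with the Python standard library, assuming a single <DOC> block (the released training file presumably concatenates many such blocks). Offsets are 1-based and count every character and punctuation mark, so the erroneous span can be sliced out directly.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<DOC>'
    '<SENTENCE id="B1-1120">我的中文進步了非常快</SENTENCE>'
    '<MISTAKE start_off="7" end_off="7">'
    '<TYPE>Selection</TYPE>'
    '<CORRECTION>我的中文進步得非常快</CORRECTION>'
    '</MISTAKE>'
    '</DOC>'
)

sentence = doc.find("SENTENCE")
mistake = doc.find("MISTAKE")
sid = sentence.get("id")
start = int(mistake.get("start_off"))
end = int(mistake.get("end_off"))
error_type = mistake.find("TYPE").text.strip()        # Figure 1 pads tag contents
correction = mistake.find("CORRECTION").text.strip()  # with spaces, hence strip()

# 1-based character offsets: slice the erroneous span out of the sentence.
span = sentence.text[start - 1:end]
print(sid, start, end, error_type, span, "->", correction)
```

For B1-1120 this recovers the span 了 at position 7, which the correction replaces with 得.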
GEM-SciDuet-train-40#paper-1056#slide-13
1056
Overview of the NLP-TEA 2015 Shared Task for Chinese Grammatical Error Diagnosis
This paper introduces the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis. We describe the task, data preparation, performance metrics, and evaluation results. The hope is that such an evaluation campaign may produce more advanced Chinese grammatical error diagnosis techniques. All data sets with gold standards and evaluation tools are publicly available for research purposes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97 ], "paper_content_text": [ "Introduction Human language technologies for English grammatical error correction have attracted more attention in recent years (Ng et al., 2013; .", "In contrast to the plethora of research related to develop NLP tools for learners of English as a foreign language, relatively few studies have focused on detecting and correcting grammatical errors for use by learners of Chinese as a foreign language (CFL).", "A classifier has been designed to detect word-ordering errors in Chinese sentences (Yu and Chen, 2012) .", "A ranking SVMbased model has been further explored to suggest corrections for word-ordering errors (Cheng et al., 2014) .", "Relative positioning and parse template language models have been proposed to detect Chinese grammatical errors written by US learners (Wu et al., 2010) .", "A penalized probabilistic first-order inductive learning algorithm has been presented for Chinese grammatical error diagnosis (Chang et al.", "2012) .", "A set of linguistic rules with syntactic information was manually crafted to detect CFL grammatical errors (Lee et al., 2013) .", "A sentence judgment system has been further developed to integrate both rule-based linguistic analysis and n-gram statistical learning for grammatical error detection .", "The ICCE-2014 workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA) organized a shared task on CFL grammatical error diagnosis .", "Due to the greater challenge in identifying grammatical errors in CFL leaners' written sentences, the NLP-TEA 2015 shared task features a Chinese Grammatical Error Diagnosis (CGED) task, providing an evaluation platform for the development and implementation of NLP tools for computer-assisted Chinese learning.", "The developed system should identify whether a given sentence contains grammatical errors, identify the error types, and indicate the range of occurred errors.", "This paper gives an overview of this shared task.", "The rest of this article is organized as follows.", "Section 2 provides the details of the designed task.", "Section 3 introduces the developed data sets.", "Section 4 proposes evaluation metrics.", "Section 5 presents the results of participant approaches for performance comparison.", "Section 6 summarizes the findings and offers futures research directions.", "Task Description The goal of this shared task is to develop NLP tools for identifying the grammatical errors in sentences written by the CFL learners.", "Four PADS error types are included in the target modification taxonomy, that is, mis-ordering (Permutation), redundancy (Addition), omission (Deletion), and mis-selection (Substitution).", "For the sake of simplicity, the input sentence is selected to contain one defined error types.", "The developed tool is expected to identify the error types and its position at which it occurs in the sentence.", "The input instance is given a unique sentence number sid.", "If the inputs contain no grammatical errors, the tool should return \"sid, correct\".", "If an input sentence contains a grammatical error, the output format should be a 
quadruple of \"sid, start_off, end_off, error_type\", where \"start_off\" and \"end_off\" respectively denote the characters at which the grammatical error starts and ends, where each character or punctuation mark occupies 1 space for counting positions.", "\"Error_type\" represents one defined error type in terms of \"Redundant,\" \"Missing,\" \"Selection,\" and \"Disorder\".", "Examples are shown as follows.", "• Example 1 Input: (sid=B2-0080) 他是我的以前的室友 Output: B2-0080, 4, 4, Redundant • Example 2 Input: (sid=A2-0017) 那電影是機器人的故事 Output: A2-0017, 2, 2, Missing • Example 3 Input: (sid=A2-0017) 那部電影是機器人的故事 Output: A2-0017, correct • Example 4 Input: (sid=B1-1193) 吳先生是修理腳踏車的拿手 Output: B1-1193, 11, 12, Selection • Example 5 Input: (sid=B2-2292) 所 以 我 不 會 讓 失 望 她 Output: B2-2292, 7, 9, Disorder The character \"的\" is a redundant character in Ex.", "1.", "There is a missing character between \"那\" and \"電影\" in Ex.", "2, and a missed character \"部\" is shown in the correct sentence in Ex.", "3.", "In Ex.", "4, \"拿手\" is a wrong word.", "One of correct words may be \"好手\".", "\"失望她\" is a word ordering error in Ex.", "5.", "The correct order should be \"她失 望\".", "Data Preparation The learner corpus used in our task was collected from the essay section of the computer-based Test of Chinese as a Foreign Language (TOCFL), administered in Taiwan.", "Native Chinese speakers were trained to manually annotate grammatical errors and provide corrections corresponding to each error.", "The essays were then split into three sets as follows.", "(1) Training Set: This set included 2,205 selected sentences with annotated grammatical errors and their corresponding corrections.", "Each sentence is represented in SGML format as shown in Fig.", "1 .", "Error types were categorized as redundant (430 instances), missing (620), selection (849), and disorder (306).", "All sentences in this set were collected to use for training the grammatical diagnostic tools.", "<DOC> <SENTENCE id=\"B1-1120\"> 我的中文進步了非常快 </SENTENCE> <MISTAKE start_off=\"7\" end_off=\"7\"> <TYPE> Selection </TYPE> <CORRECTION> 我的中文進步得非常快 </CORRECTION> </MISTAKE> </DOC> Figure 1.", "An sentence denoted in SGML format (2) Dryrun Set: A total of 55 sentences were distributed to participants to allow them familiarize themselves with the final testing process.", "Each participant was allowed to submit several runs generated using different models with different parameter settings of their developed tools.", "In addition, to ensure the submitted results could be correctly evaluated, participants were allowed to fine-tune their developed models in the dryrun phase.", "The purpose of dryrun is to validate the submitted output format only, and no dryrun outcomes were considered in the official evaluation (3) Test Set: This set consists of 1,000 testing sentences.", "Half of these sentences contained no grammatical errors, while the other half included a single defined grammatical error: redundant (132 instances), missing (126), selection (110), and disorder (132).", "The evaluation was conducted as an open test.", "In addition to the data sets provided, registered research teams were allowed to employ any linguistic and computational resources to identify the grammatical errors.", "Table 1 shows the confusion matrix used for performance evaluation.", "In the matrix, TP (True Positive) is the number of sentences with grammatical errors that are correctly identified by the developed tool; FP (False Positive) is the number of sentences in which non-existent 
grammatical errors are identified; TN (True Negative) is the number of sentences without grammatical errors that are correctly identified as such; FN (False Negative) is the number of sentences with grammatical errors for which no errors are identified.", "Performance Metrics The criteria for judging correctness are determined at three levels as follows.", "(1) Detection level: binary classification of a given sentence, that is, correct or incorrect should be completely identical with the gold standard.", "All error types will be regarded as incorrect.", "(2) Identification level: this level could be considered as a multi-class categorization problem.", "All error types should be clearly identified.", "A correct case should be completely identical with the gold standard of the given error type.", "(3) Position level: in addition to identifying the error types, this level also judges the occurred range of grammatical error.", "That is to say, the system results should be perfectly identical with the quadruples of gold standard.", "The following metrics are measured at all levels with the help of the confusion matrix.", "For example, given 8 testing inputs with gold standards shown as \"B1-1138, 7, 10, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, correct\", \"B1-0990, correct\", \"A2-0789, 2, 3, Selection\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\" and \"A2-0920, correct\", the system may output the result shown as \"B1-1138, 7, 8, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, 5, 6, Missing\", \"B1-0990, correct\", \"A2-0789, 2, 5, Disorder\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\" and \"A2-0920, 4, 5, Selection\".", "The evaluation tool will yield the following performance.", "• False Positive Rate (FPR) = 0.5 (=2/4) Notes: {\"A2-0904, 5, 6, Missing\", \"A2-0920, 4, 5, Selection\"} /{\"A2-0904, correct\", \"B1-0090, correct\", \"B1-0295, correct\", \"A2-0920, correct\"} • Detection-level • Accuracy =0.75 (=6/8) Notes: {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"B1-0990, correct\", \"A2-0789, Disorder\", \"B1-0295, correct\", \"B2-0591, Redundant\"} / {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"A2-0904, Missing\", \"B1-0990, correct\", \"A2-0789, Disorder\", \"B1-0295, correct\", \"B2-0591, Redundant\", \"A2-0920, Selection\".}", "• Precision = 0.67 (=4/6) Notes: {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"A2-0789, Disorder\", \"B2-0591, Redundant\"} / {\"B1-1138, Disorder\", \"A2-0087, Missing\", \"A2-0904, Missing\", \"A2-0789, Disorder\", \"B2-0591, Redundant\", \"A2-0920, Selection\".}", "• Position-level • Accuracy =0.5 (=4/8) Notes: {\"A2-0087, 12, 13, Missing\", \"B1-0990, correct\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\"} / {\"B1-1138, 7, 8, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, 5, 6, Missing\", \"B1-0990, correct\", \"A2-0789, 2, 5, Disorder\", \"B1-0295, correct\", \"B2-0591, 3, 3, Redundant\", \"A2-0920, 4, 5, Selection\"} • Precision = 0.33 (=2/6) Notes: {\"A2-0087, 12, 13, Missing\", \"B2-0591, 3, 3, Redundant\"} / {\"B1-1138, 7, 8, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0904, 5, 6, Missing\", \"A2-0789, 2, 5, Disorder\", \"B2-0591, 3, 3, Redundant\", \"A2-0920, 4, 5, Selection\"} • Recall = 0.5 (=2/4) Notes: {\"A2-0087, 12, 13, Missing\", \"B2-0591, 3, 3, Redundant\"} / {\"B1-1138, 7, 10, Disorder\", \"A2-0087, 12, 13, Missing\", \"A2-0789, 2, 3, Selection\", \"B2-0591, 3, 3, Redundant\"} • F1=0.4 (=2*0.33*0.5/(0.33+0.5)) Table 2 summarizes the submission statistics for the participating 
teams.", "Of 13 registered teams, 6 teams submitted their testing results.", "In formal testing phase, each participant was allowed to submit at most three runs using different models or parameter settings.", "In total, we had received 18 runs.", "Table 3 shows the task testing results.", "The CYUT team achieved the lowest false positive rate of 0.082.", "Detection-level evaluations are designed to detect whether a sentence contains grammatical errors or not.", "A neutral baseline can be easily achieved by always reporting all testing errors are correct without errors.", "According to the test data distribution, the baseline system can achieve an accuracy level of 0.5.", "All systems achieved results slightly better than the baseline.", "The system result submitted by NCYU achieved the best detection accuracy of 0.607.", "We used the F1 score to reflect the tradeoff between precision and recall.", "In the testing results, NTOU provided the best error detection results, providing a high F1 score of 0.6754.", "For correction-level evaluations, the systems need to identify the error types in the given sentences.", "The system developed by NCYU provided the highest F1 score of 0.3584 for grammatical error identification.", "For position-level evaluations, CYUT achieved the best F1 score of 0.1742.", "Note that it is difficult to perfectly identify the error positions, partly because no word delimiters exist among Chinese words.", "Table 3 .", "Testing results of our Chinese grammatical error diagnosis task.", "Evaluation Results In summary, none of the submitted systems provided superior performance.", "It is a really difficult task to develop an effective computer-assisted learning tool for grammatical error diagnosis, especially for the CFL uses.", "In general, this research problem still has long way to go.", "Conclusions and Future Work This paper provides an overview of the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis, including task design, data preparation, evaluation metrics, and performance evaluation results.", "Regardless of actual performance, all submissions contribute to the common effort to produce an effective Chinese grammatical diagnosis tool, and the individual reports in the shared task proceedings provide useful insight into Chinese language processing.", "We hope the data sets collected for this shared task can facilitate and expedite the future development of NLP tools for computer-assisted Chinese language learning.", "Therefore, all data sets with gold standards and evaluation tool are publicly available for research purposes at http://ir.itc.ntnu.edu.tw/lre/nlptea15cged.htm.", "We plan to build new language resources to improve existing techniques for computer-aided Chinese language learning.", "In addition, new data sets with the contextual information of target sentences obtained from CFL learners will be investigated for the future enrichment of this research topic." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Task Description", "Data Preparation", "Performance Metrics", "Evaluation Results", "Conclusions and Future Work" ] }
GEM-SciDuet-train-40#paper-1056#slide-13
Future Work
NLP-TEA-3 Workshop in COLING 2016 Chinese Grammatical Error Diagnosis
NLP-TEA-3 Workshop in COLING 2016 Chinese Grammatical Error Diagnosis
[]
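For reference, the output protocol fixed in the task description above is trivially mechanisable: "sid, correct" for clean sentences, otherwise the quadruple "sid, start_off, end_off, error_type". The helper below is illustrative (its name and signature are assumptions, not part of any released kit); the two prints reproduce Examples 1 and 3 from the paper content.

```python
def format_output(sid, error=None):
    # error is None for a clean sentence, else (start_off, end_off, error_type).
    if error is None:
        return f"{sid}, correct"
    start_off, end_off, error_type = error
    return f"{sid}, {start_off}, {end_off}, {error_type}"

print(format_output("A2-0017"))                       # A2-0017, correct
print(format_output("B2-0080", (4, 4, "Redundant")))  # B2-0080, 4, 4, Redundant
```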
GEM-SciDuet-train-41#paper-1061#slide-0
1061
Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing
Robust dialogue belief tracking is a key component in maintaining good quality dialogue systems. The tasks that dialogue systems are trying to solve are becoming increasingly complex, requiring scalability to multi-domain, semantically rich dialogues. However, most current approaches have difficulty scaling up with domains because of the dependency of the model parameters on the dialogue ontology. In this paper, a novel approach is introduced that fully utilizes semantic similarity between dialogue utterances and the ontology terms, allowing the information to be shared across domains. The evaluation is performed on a recently collected multi-domain dialogues dataset, one order of magnitude larger than currently available corpora. Our model demonstrates great capability in handling multi-domain dialogues, simultaneously outperforming existing state-of-the-art models in single-domain dialogue tracking tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ], "paper_content_text": [ "Introduction Spoken Dialogue Systems (SDS) are computer programs that can hold a conversation with a human.", "These can be task-based systems that help the user achieve specific goals, e.g.", "finding and booking hotels or restaurants.", "In order for the SDS to infer the user goals/intentions during the conversation, its Belief Tracking (BT) component maintains a distribution of states, called a belief state, across dialogue turns (Young et al., 2010) .", "The belief state is used by the system to take actions in each turn until the conversation is concluded and the user goal is achieved.", "In order to extract these belief states from the conversation, traditional approaches use a Spoken Language Understanding (SLU) unit that utilizes a semantic dictionary to hold all the key terms, rephrasings and alternative mentions of a belief state.", "The SLU then delexicalises each turn using this semantic dictionary, before it passes it to the BT component (Wang and Lemon, 2013; Henderson et al., 2014b; Williams, 2014; Zilka and Jurcicek, 2015; Perez and Liu, 2016; Rastogi et al., 2017) .", "However, this approach is not scalable to multi-domain dialogues because of the effort required to define a semantic dictionary for each domain.", "More advanced approaches, such as the Neural Belief Tracker (NBT), use word embeddings to alleviate the need for delexicalisation and combine the SLU and BT into one unit, mapping directly from turns to belief states .", "Nevertheless, the NBT model does not tackle the problem of mixing different domains in a conversation.", "Moreover, as each slot is trained independently without sharing information between different slots, scaling such approaches to large multi-domain systems is greatly hindered.", "In this paper, we propose a model that jointly identifies the domain and tracks the belief states corresponding to that domain.", "It uses semantic similarity between ontology terms and turn utterances to allow for parameter sharing between different slots across domains and within a single domain.", "In addition, the model parameters are independent of the ontology/belief states, thus the dimensionality of the parameters does not increase with the size of the ontology, making the model practically feasible to deploy in multidomain environments without any modifications.", "Finally, we introduce a new, large-scale corpora of natural, human-human conversations providing new possibilities to train complex, neural-based models.", "Our model systematically improves upon state-of-the-art neural approaches both in single and multi-domain conversations.", "Background The belief states of the BT are defined based on an ontology -the structured representation of the database which contains entities the system can talk about.", "The ontology defines the terms over which the distribution is to be tracked in the dialogue.", "This ontology is constructed in terms of slots and values in a single domain setting.", "Or, alternatively, in terms of domains, slots and values in a multi-domain 
environment.", "Each domain consists of multiple slots and each slot contains several values, e.g.", "domain=hotel, slot=price, value=expensive.", "In each turn, the BT fits a distribution over the values of each slot in each domain, and a none value is added to each slot to indicate if the slot is not mentioned so that the distribution sums up to 1.", "The BT then passes these states to the Policy Optimization unit as full probability distributions to take actions.", "This allows robustness to noisy environments (Young et al., 2010) .", "The larger the ontology, the more flexible and multi-purposed the system is, but the harder it is to train and maintain a good quality BT.", "Related Work In recent years, a plethora of research has been generated on belief tracking (Williams et al., 2016) .", "For the purposes of this paper, two previously proposed models are particularly relevant.", "Neural Belief Tracker (NBT) The main idea behind the NBT is to use semantically specialized pretrained word embeddings to encode the user utterance, the system act and the candidate slots and values taken from the ontology.", "These are fed to semantic decoding and context modeling modules that apply a three-way gating mechanism and pass the output to a non-linear classifier layer to produce a distribution over the values for each slot.", "It uses a simple update rule, p(s t ) = βp(s t−1 ) + λy, where p(s t ) is the belief state at time step t, y is the output of the binary decision maker of the NBT and β and λ are tunable parameters.", "The NBT leverages semantic information from the word embeddings to resolve lexical/morphological ambiguity and maximize the shared parameters across the values of each slot.", "However, it only applies to a single domain and does not share parameters across slots.", "Multi-domain Dialogue State Tracking Recently, Rastogi et al.", "(2017) proposed a multidomain approach using delexicalized utterances fed to a two layer stacked bi-directional GRU network to extract features from the user and the system utterances.", "These, combined with the candidate slots and values, are passed to a feed-forward neural network with a softmax in the last layer.", "The candidate set fed to the network consists of the selected candidates from the previous turn and candidates from the ontology to a limit K, which restricts the maximum size of the chosen set.", "Consequently, the model does not need an ad-hoc belief state update mechanism like in the NBT.", "The parameters of the GRU network are defined for the domain, whereas the parameters of the feed-forward network are defined per slot, allowing transfer learning across different domains.", "However, the model relies on delexicalization to extract the features, which limits the performance of the BT, as it does not scale to the rich variety of the language.", "Moreover, the number of parameters increases with the number of slots.", "Method The core idea is to leverage semantic similarities between the utterances and ontology terms to compute the belief state distribution.", "In this way, the model parameters only learn to model the interactions between turn utterances and ontology terms in the semantic space, rather than the mapping from utterances to states.", "Consequently, information is shared between both slots and across domains.", "Additionally, the number of parameters does not increase with the ontology size.", "Domain tracking is considered as a separate task but is learned jointly with the belief state tracking of the slots and 
values.", "The proposed model uses semantically specialized pre-trained word embeddings (Wieting et al., 2015) .", "To encode the user and system utterances, we employed 7 independent bi-directional LSTMs (Graves and Schmidhuber, 2005) .", "Three of them are used to encode the system utterance for domain, slot and value tracking respectively.", "Similarly, three Bi-LSTMs encode the user utterance while and the last one is used to track the user affirmation.", "A variant of the CNNs as a feature extractor, similar to the one used in the NBT-CNN is also employed.", "Other variants of the model use CNNs as feature extractors (Kim, 2014; Kalchbrenner et al., 2014) .", "Domain Tracking Figure 1 presents the system architecture with two bi-directional LSTM networks as information encoders running over the word embeddings of the user and system utterances.", "The last hidden states of the forward and backward layers are concatenated to produce h d usr , h d sys of size L for the user and system utterances respectively.", "In the second variant of the model, CNNs are used to produce these vectors (Kim, 2014; Kalchbrenner et al., 2014) .", "To detect the presence of the domain in the dialogue turn, element-wise multiplication is used as a similarity metric between the hidden states and the ontology embeddings of the domain: d k = h d k tanh(W d e d + b d ), where k ∈ {usr, sys}, e d is the embedding vector of the domain and W d ∈ R L×D transforms the domain word embeddings of dimension D to the hidden representation.", "The information about semantic similarity is held by d usr and d sys , which are fed to a non-linear layer to output a binary decision: P t (d) = σ(w d {d usr ⊕ d sys } + b d ), where w d ∈ R 2L and b d are learnable parameters that map the semantic similarity to a belief state probability P t (d) of a domain d at a turn t. 
Candidate Slots and Values Tracking Slots and values are tracked using a similar architecture as for domain tracking (Figure 1) .", "However, to correctly model the context of the systemuser dialogue at each turn, three different cases are considered when computing the similarity vectors: 1.", "Inform: The user is informing the system about his/her goal, e.g.", "'I am looking for a restaurant that serves Turkish food'.", "2.", "Request: The system is requesting information by asking the user about the value of a specific slot.", "If the system utterance is: 'When do you want the taxi to arrive?'", "and the user answers with '19:30'.", "3.", "Confirm: The system wants to confirm information about the value of a specific slot.", "If the system asked: 'Would you like free parking?", "', the user can either affirm positively or negatively.", "The model detects the user affirmation, using a separate bi-directional LSTM or CNN to output h a usr .", "The three cases are modelled as following: y s,v inf = w inf {s usr ⊕ v usr } + b inf , y s,v req = w req {s sys ⊕ v usr } + b req , y s,v af = w af {s sys ⊕ v sys ⊕ h a usr } + b af , where s k , v k for k ∈ {usr, sys} represent semantic similarity between the user and system utterances and the ontology slot and value terms respectively computed as shown in Figure 1 , and w and b are learnable parameters.", "The distribution over the values of slot s in domain d at turn t can be computed by summing the unscaled states, y inf , y req and y af for each value v in s, and applying a softmax to normalize the distribution: P t (s, v) = softmax(y s,v inf + y s,v req + y s,v af ).", "Belief State Update Since dialogue systems in the real-world operate in noisy environments, a robust BT should utilize the flow of the conversation to reduce the uncertainty in the belief state distribution.", "This can be achieved by passing the output of the decision maker, at each turn, as an input to an RNN that runs over the dialogue turns as shown in Figure 1 , which allows the gradients to be propagated across turns.", "This alleviates the problem of tuning hyper-parameters for rule-based updates.", "To avoid the vanishing gradient problem, three networks were tested: a simple RNN, an RNN with a memory cell (Henderson et al., 2014a ) and a LSTM.", "The RNN with a memory cell proved to give the best results.", "In addition to the fact that it reduces the vanishing gradient problem, this variant is less complex than an LSTM, which makes training easier.", "Furthermore, a variant of RNN used for domain tracking has all its weights of the form: W i = α i I, where α i is a distinct learnable parameter for hidden, memory and previous state layers and I is the identity matrix.", "Similarly, weights of the RNN used to track the slots and values is of the form: W j = γ j I + λ j (1 − I), where γ j and λ j are the learnable parameters.", "These two variants of RNN are a combination of Henderson et al.", "(2014a) and Mrkvsić and Vulić (2018) previous works.", "The output is P 1:T (d) and P 1:T (s, v), which represents the joint probability distribution of the domains and slots and values respectively over the complete dialogue.", "Combining these together produces the full belief state distribution of the dialogue: Training Criteria Domain tracking and slots and values tracking are trained disjointly.", "Belief state labels for each turn are split into domains and slots and values.", "Thanks to the disjoint training, the learning of slot and value belief states are not restricted 
to a specific domain.", "Therefore, the model shares the knowledge of slots and values across different domains.", "The loss function for the domain tracking is: L d = − N n=1 d∈D t n (d)logP n 1:T (d), where d is a vector of domains over the dialogue, t n (d) is the domain label for the dialogue n and N is the number of dialogues.", "Similarly, the loss function for the slots and values tracking is: L s,v = − N n=1 s,v∈S,V t n (s, v)logP n 1:T (s, v), where s and v are vectors of slots and values over the dialogue and t n (s, v) is the joint label vector for the dialogue n. Datasets and Baselines Neural approaches to statistical dialogue development, especially in a task-oriented paradigm, are greatly hindered by the lack of large scale datasets.", "That is why, following the Wizard-of-Oz (WOZ) approach (Kelley, 1984; , we ran text-based multi-domain corpus data collection scheme through Amazon MTurk.", "The main goal of the data collection was to acquire humanhuman conversations between a tourist visiting a city and a clerk from an information center.", "At the beginning of each dialogue the user (visitor) was given explicit instructions about the goal to fulfill, which often spanned multiple domains.", "The task of the system (wizard) is to assist a visitor having an access to databases over domains.", "The WOZ paradigm allowed us to obtain natural and semantically rich multi-topic dialogues spanning over multiple domains such as hotels, attractions, restaurants, booking trains or taxis.", "The dialogues cover from 1 up to 5 domains per dialogue greatly varying in length and complexity.", "Data Structure The data consists of 2480 single-domain dialogues and 7375 multi-domain dialogues usually spanning from 2 up to 5 domains.", "Some domains consists also of sub-domains like booking.", "The average sentence lengths are 11.63 and 15.01 for users Evaluation We also used the extended WOZ 2.0 dataset (Wen et al., 2017).", "2 WOZ2 dataset consists of 1200 single topic dialogues constrained to the restaurant domain.", "All the weights were initialised using normal distribution of zero mean and unit variance and biases were initialised to zero.", "ADAM optimizer (Kingma and Ba, 2014) (with 64 batch size) is used to train all the models for 600 epochs.", "Dropout (Srivastava et al., 2014) was used for regularisation (50% dropout rate on all the intermediate representations).", "For each of the two datasets we compare our proposed architecture (using either Bi-LSTM or CNN as encoders) to the NBT model 3 .", "This is because the dialogues in the new dataset are richer and more noisier, as a closer resemblance to real environment dialogues.", "Table 2 presents the results on multi-domain dialogues from the new dataset described in Section 5.", "To demonstrate the difficulty of the multidomain belief tracking problem, values of a theoretical baseline that samples the belief state uniformly at random are also presented.", "Our model gracefully handles such a difficult task.", "In most of the cases, CNNs demonstrate better performance than Bi-LSTMs.", "We hypothesize that this comes from the effectiveness of extracting local and position-invariant features, which are crucial for semantic similarities (Yin et al., 2017) .", "Results Conclusions In this paper, we proposed a new approach that tackles the issue of multi-domain belief tracking, such as model parameter scalability with the ontology size.", "Our model shows improved performance in single-domain tasks compared to the state-ofthe-art NBT 
method.", "By exploiting semantic similarities between dialogue utterances and ontology terms, the model alleviates the need for ontologydependent parameters and maximizes the amount of information shared between slots and across domains.", "In future, we intend to investigate introducing new domains and ontology terms without further training thus performing zero-shot learning." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "7" ], "paper_header_content": [ "Introduction", "Background", "Related Work", "Neural Belief Tracker (NBT)", "Multi-domain Dialogue State Tracking", "Method", "Domain Tracking", "Candidate Slots and Values Tracking", "Belief State Update", "Training Criteria", "Datasets and Baselines", "Data Structure", "Evaluation", "Conclusions" ] }
GEM-SciDuet-train-41#paper-1061#slide-0
Main Components of Spoken Dialogue Systems
Semantic decoding and belief tracking require different types of labelled data. Combining these two units reduces the amount of labelled data required and avoids the possibility of information loss in the SD.
Semantic decoding and belief tracking require different types of labelled data. Combining these two units reduces the amount of labelled data required and avoids the possibility of information loss in the SD.
[]
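The belief-state update section of the same paper constrains the recurrent weights to W_i = α_i I for domain tracking and W_j = γ_j I + λ_j (1 − I) for slot/value tracking, so each matrix costs one or two scalars instead of n². A sketch of just that weight construction; the scalar values and the plain tanh step are illustrative, and the paper's memory-cell recurrence is omitted.

```python
import numpy as np

n = 5                                 # state dimension (illustrative)
I = np.eye(n)

alpha = 0.9                           # one learnable scalar per domain matrix
W_domain = alpha * I                  # W_i = alpha_i * I

gamma, lam = 0.8, 0.05                # two learnable scalars per slot/value matrix
W_slot = gamma * I + lam * (1.0 - I)  # W_j = gamma_j * I + lambda_j * (1 - I)

prev_state = np.full(n, 0.2)          # belief over the slot's values at t-1
evidence = np.zeros(n); evidence[2] = 1.0   # e.g. new support for value 3
state = np.tanh(W_slot @ prev_state + evidence)
print(state)
```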
GEM-SciDuet-train-41#paper-1061#slide-2
1061
Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing
Robust dialogue belief tracking is a key component in maintaining good quality dialogue systems. The tasks that dialogue systems are trying to solve are becoming increasingly complex, requiring scalability to multi-domain, semantically rich dialogues. However, most current approaches have difficulty scaling up with domains because of the dependency of the model parameters on the dialogue ontology. In this paper, a novel approach is introduced that fully utilizes semantic similarity between dialogue utterances and the ontology terms, allowing the information to be shared across domains. The evaluation is performed on a recently collected multi-domain dialogues dataset, one order of magnitude larger than currently available corpora. Our model demonstrates great capability in handling multi-domain dialogues, simultaneously outperforming existing state-of-the-art models in single-domain dialogue tracking tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ], "paper_content_text": [ "Introduction Spoken Dialogue Systems (SDS) are computer programs that can hold a conversation with a human.", "These can be task-based systems that help the user achieve specific goals, e.g.", "finding and booking hotels or restaurants.", "In order for the SDS to infer the user goals/intentions during the conversation, its Belief Tracking (BT) component maintains a distribution of states, called a belief state, across dialogue turns (Young et al., 2010) .", "The belief state is used by the system to take actions in each turn until the conversation is concluded and the user goal is achieved.", "In order to extract these belief states from the conversation, traditional approaches use a Spoken Language Understanding (SLU) unit that utilizes a semantic dictionary to hold all the key terms, rephrasings and alternative mentions of a belief state.", "The SLU then delexicalises each turn using this semantic dictionary, before it passes it to the BT component (Wang and Lemon, 2013; Henderson et al., 2014b; Williams, 2014; Zilka and Jurcicek, 2015; Perez and Liu, 2016; Rastogi et al., 2017) .", "However, this approach is not scalable to multi-domain dialogues because of the effort required to define a semantic dictionary for each domain.", "More advanced approaches, such as the Neural Belief Tracker (NBT), use word embeddings to alleviate the need for delexicalisation and combine the SLU and BT into one unit, mapping directly from turns to belief states .", "Nevertheless, the NBT model does not tackle the problem of mixing different domains in a conversation.", "Moreover, as each slot is trained independently without sharing information between different slots, scaling such approaches to large multi-domain systems is greatly hindered.", "In this paper, we propose a model that jointly identifies the domain and tracks the belief states corresponding to that domain.", "It uses semantic similarity between ontology terms and turn utterances to allow for parameter sharing between different slots across domains and within a single domain.", "In addition, the model parameters are independent of the ontology/belief states, thus the dimensionality of the parameters does not increase with the size of the ontology, making the model practically feasible to deploy in multidomain environments without any modifications.", "Finally, we introduce a new, large-scale corpora of natural, human-human conversations providing new possibilities to train complex, neural-based models.", "Our model systematically improves upon state-of-the-art neural approaches both in single and multi-domain conversations.", "Background The belief states of the BT are defined based on an ontology -the structured representation of the database which contains entities the system can talk about.", "The ontology defines the terms over which the distribution is to be tracked in the dialogue.", "This ontology is constructed in terms of slots and values in a single domain setting.", "Or, alternatively, in terms of domains, slots and values in a multi-domain 
environment.", "Each domain consists of multiple slots and each slot contains several values, e.g.", "domain=hotel, slot=price, value=expensive.", "In each turn, the BT fits a distribution over the values of each slot in each domain, and a none value is added to each slot to indicate if the slot is not mentioned so that the distribution sums up to 1.", "The BT then passes these states to the Policy Optimization unit as full probability distributions to take actions.", "This allows robustness to noisy environments (Young et al., 2010) .", "The larger the ontology, the more flexible and multi-purposed the system is, but the harder it is to train and maintain a good quality BT.", "Related Work In recent years, a plethora of research has been generated on belief tracking (Williams et al., 2016) .", "For the purposes of this paper, two previously proposed models are particularly relevant.", "Neural Belief Tracker (NBT) The main idea behind the NBT is to use semantically specialized pretrained word embeddings to encode the user utterance, the system act and the candidate slots and values taken from the ontology.", "These are fed to semantic decoding and context modeling modules that apply a three-way gating mechanism and pass the output to a non-linear classifier layer to produce a distribution over the values for each slot.", "It uses a simple update rule, p(s t ) = βp(s t−1 ) + λy, where p(s t ) is the belief state at time step t, y is the output of the binary decision maker of the NBT and β and λ are tunable parameters.", "The NBT leverages semantic information from the word embeddings to resolve lexical/morphological ambiguity and maximize the shared parameters across the values of each slot.", "However, it only applies to a single domain and does not share parameters across slots.", "Multi-domain Dialogue State Tracking Recently, Rastogi et al.", "(2017) proposed a multidomain approach using delexicalized utterances fed to a two layer stacked bi-directional GRU network to extract features from the user and the system utterances.", "These, combined with the candidate slots and values, are passed to a feed-forward neural network with a softmax in the last layer.", "The candidate set fed to the network consists of the selected candidates from the previous turn and candidates from the ontology to a limit K, which restricts the maximum size of the chosen set.", "Consequently, the model does not need an ad-hoc belief state update mechanism like in the NBT.", "The parameters of the GRU network are defined for the domain, whereas the parameters of the feed-forward network are defined per slot, allowing transfer learning across different domains.", "However, the model relies on delexicalization to extract the features, which limits the performance of the BT, as it does not scale to the rich variety of the language.", "Moreover, the number of parameters increases with the number of slots.", "Method The core idea is to leverage semantic similarities between the utterances and ontology terms to compute the belief state distribution.", "In this way, the model parameters only learn to model the interactions between turn utterances and ontology terms in the semantic space, rather than the mapping from utterances to states.", "Consequently, information is shared between both slots and across domains.", "Additionally, the number of parameters does not increase with the ontology size.", "Domain tracking is considered as a separate task but is learned jointly with the belief state tracking of the slots and 
values.", "The proposed model uses semantically specialized pre-trained word embeddings (Wieting et al., 2015) .", "To encode the user and system utterances, we employed 7 independent bi-directional LSTMs (Graves and Schmidhuber, 2005) .", "Three of them are used to encode the system utterance for domain, slot and value tracking respectively.", "Similarly, three Bi-LSTMs encode the user utterance while and the last one is used to track the user affirmation.", "A variant of the CNNs as a feature extractor, similar to the one used in the NBT-CNN is also employed.", "Other variants of the model use CNNs as feature extractors (Kim, 2014; Kalchbrenner et al., 2014) .", "Domain Tracking Figure 1 presents the system architecture with two bi-directional LSTM networks as information encoders running over the word embeddings of the user and system utterances.", "The last hidden states of the forward and backward layers are concatenated to produce h d usr , h d sys of size L for the user and system utterances respectively.", "In the second variant of the model, CNNs are used to produce these vectors (Kim, 2014; Kalchbrenner et al., 2014) .", "To detect the presence of the domain in the dialogue turn, element-wise multiplication is used as a similarity metric between the hidden states and the ontology embeddings of the domain: d k = h d k tanh(W d e d + b d ), where k ∈ {usr, sys}, e d is the embedding vector of the domain and W d ∈ R L×D transforms the domain word embeddings of dimension D to the hidden representation.", "The information about semantic similarity is held by d usr and d sys , which are fed to a non-linear layer to output a binary decision: P t (d) = σ(w d {d usr ⊕ d sys } + b d ), where w d ∈ R 2L and b d are learnable parameters that map the semantic similarity to a belief state probability P t (d) of a domain d at a turn t. 
"Candidate Slots and Values Tracking Slots and values are tracked using a similar architecture as for domain tracking (Figure 1).", "However, to correctly model the context of the system-user dialogue at each turn, three different cases are considered when computing the similarity vectors:", "1. Inform: the user is informing the system about his/her goal, e.g. 'I am looking for a restaurant that serves Turkish food'.", "2. Request: the system is requesting information by asking the user about the value of a specific slot, e.g. the system utterance is 'When do you want the taxi to arrive?' and the user answers with '19:30'.", "3. Confirm: the system wants to confirm information about the value of a specific slot; if the system asked 'Would you like free parking?', the user can answer either positively or negatively.", "The model detects the user affirmation using a separate bi-directional LSTM or CNN to output $h^a_{usr}$.", "The three cases are modelled as follows: $y^{s,v}_{inf} = w_{inf} \{s_{usr} \oplus v_{usr}\} + b_{inf}$, $y^{s,v}_{req} = w_{req} \{s_{sys} \oplus v_{usr}\} + b_{req}$ and $y^{s,v}_{af} = w_{af} \{s_{sys} \oplus v_{sys} \oplus h^a_{usr}\} + b_{af}$, where $s_k$ and $v_k$ for $k \in \{usr, sys\}$ represent the semantic similarity between the user and system utterances and the ontology slot and value terms respectively, computed as shown in Figure 1, and $w$ and $b$ are learnable parameters.", "The distribution over the values of slot $s$ in domain $d$ at turn $t$ is computed by summing the unscaled scores $y_{inf}$, $y_{req}$ and $y_{af}$ for each value $v$ in $s$, and applying a softmax to normalize the distribution: $P_t(s, v) = \mathrm{softmax}(y^{s,v}_{inf} + y^{s,v}_{req} + y^{s,v}_{af})$.",
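As a sketch of how the three unscaled scores combine into the per-slot distribution $P_t(s, v)$, the snippet below mocks the similarity vectors $s_k$, $v_k$ and the affirmation encoding with random vectors rather than the encoder outputs of Figure 1; all sizes and parameter values are invented for illustration.

import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

rng = np.random.default_rng(1)
L = 8                                      # similarity feature size (illustrative)
values = ["cheap", "moderate", "expensive", "none"]

# Scoring parameters, one set per case, shared across slots and values.
w_inf, b_inf = rng.normal(size=2 * L), 0.0
w_req, b_req = rng.normal(size=2 * L), 0.0
w_af, b_af = rng.normal(size=3 * L), 0.0

scores = []
for _ in values:                           # one joint score per candidate value v
    s_usr, s_sys = rng.normal(size=L), rng.normal(size=L)  # slot similarities
    v_usr, v_sys = rng.normal(size=L), rng.normal(size=L)  # value similarities
    h_a_usr = rng.normal(size=L)           # user affirmation encoding
    y_inf = w_inf @ np.concatenate([s_usr, v_usr]) + b_inf
    y_req = w_req @ np.concatenate([s_sys, v_usr]) + b_req
    y_af = w_af @ np.concatenate([s_sys, v_sys, h_a_usr]) + b_af
    scores.append(y_inf + y_req + y_af)

P_t = softmax(np.array(scores))            # P_t(s, v) over the candidate values
print(dict(zip(values, P_t.round(3))))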
to a specific domain.", "Therefore, the model shares the knowledge of slots and values across different domains.", "The loss function for the domain tracking is: L d = − N n=1 d∈D t n (d)logP n 1:T (d), where d is a vector of domains over the dialogue, t n (d) is the domain label for the dialogue n and N is the number of dialogues.", "Similarly, the loss function for the slots and values tracking is: L s,v = − N n=1 s,v∈S,V t n (s, v)logP n 1:T (s, v), where s and v are vectors of slots and values over the dialogue and t n (s, v) is the joint label vector for the dialogue n. Datasets and Baselines Neural approaches to statistical dialogue development, especially in a task-oriented paradigm, are greatly hindered by the lack of large scale datasets.", "That is why, following the Wizard-of-Oz (WOZ) approach (Kelley, 1984; , we ran text-based multi-domain corpus data collection scheme through Amazon MTurk.", "The main goal of the data collection was to acquire humanhuman conversations between a tourist visiting a city and a clerk from an information center.", "At the beginning of each dialogue the user (visitor) was given explicit instructions about the goal to fulfill, which often spanned multiple domains.", "The task of the system (wizard) is to assist a visitor having an access to databases over domains.", "The WOZ paradigm allowed us to obtain natural and semantically rich multi-topic dialogues spanning over multiple domains such as hotels, attractions, restaurants, booking trains or taxis.", "The dialogues cover from 1 up to 5 domains per dialogue greatly varying in length and complexity.", "Data Structure The data consists of 2480 single-domain dialogues and 7375 multi-domain dialogues usually spanning from 2 up to 5 domains.", "Some domains consists also of sub-domains like booking.", "The average sentence lengths are 11.63 and 15.01 for users Evaluation We also used the extended WOZ 2.0 dataset (Wen et al., 2017).", "2 WOZ2 dataset consists of 1200 single topic dialogues constrained to the restaurant domain.", "All the weights were initialised using normal distribution of zero mean and unit variance and biases were initialised to zero.", "ADAM optimizer (Kingma and Ba, 2014) (with 64 batch size) is used to train all the models for 600 epochs.", "Dropout (Srivastava et al., 2014) was used for regularisation (50% dropout rate on all the intermediate representations).", "For each of the two datasets we compare our proposed architecture (using either Bi-LSTM or CNN as encoders) to the NBT model 3 .", "This is because the dialogues in the new dataset are richer and more noisier, as a closer resemblance to real environment dialogues.", "Table 2 presents the results on multi-domain dialogues from the new dataset described in Section 5.", "To demonstrate the difficulty of the multidomain belief tracking problem, values of a theoretical baseline that samples the belief state uniformly at random are also presented.", "Our model gracefully handles such a difficult task.", "In most of the cases, CNNs demonstrate better performance than Bi-LSTMs.", "We hypothesize that this comes from the effectiveness of extracting local and position-invariant features, which are crucial for semantic similarities (Yin et al., 2017) .", "Results Conclusions In this paper, we proposed a new approach that tackles the issue of multi-domain belief tracking, such as model parameter scalability with the ontology size.", "Our model shows improved performance in single-domain tasks compared to the state-ofthe-art NBT 
method.", "By exploiting semantic similarities between dialogue utterances and ontology terms, the model alleviates the need for ontologydependent parameters and maximizes the amount of information shared between slots and across domains.", "In future, we intend to investigate introducing new domains and ontology terms without further training thus performing zero-shot learning." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "7" ], "paper_header_content": [ "Introduction", "Background", "Related Work", "Neural Belief Tracker (NBT)", "Multi-domain Dialogue State Tracking", "Method", "Domain Tracking", "Candidate Slots and Values Tracking", "Belief State Update", "Training Criteria", "Datasets and Baselines", "Data Structure", "Evaluation", "Conclusions" ] }
GEM-SciDuet-train-41#paper-1061#slide-2
Limitations of Current Belief Trackers
1. The model parameters increase with the size of the ontology 2. Many approaches rely on delexicalization, except for the Neural Belief Tracker (NBT) (Mrkšić et al., 2017) 3. Current multi-domain models do not handle mixed domains within a single dialogue This causes a bottleneck in scaling the belief tracker to larger domains and complex dialogues
1. The model parameters increase with the size of the ontology 2. Many approaches rely on delexicalization, except for the Neural Belief Tracker (NBT) (Mrkšić et al., 2017) 3. Current multi-domain models do not handle mixed domains within a single dialogue This causes a bottleneck in scaling the belief tracker to larger domains and complex dialogues
[]
GEM-SciDuet-train-41#paper-1061#slide-3
1061
Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing
Robust dialogue belief tracking is a key component in maintaining good quality dialogue systems. The tasks that dialogue systems are trying to solve are becoming increasingly complex, requiring scalability to multi-domain, semantically rich dialogues. However, most current approaches have difficulty scaling up with domains because of the dependency of the model parameters on the dialogue ontology. In this paper, a novel approach is introduced that fully utilizes semantic similarity between dialogue utterances and the ontology terms, allowing the information to be shared across domains. The evaluation is performed on a recently collected multi-domain dialogues dataset, one order of magnitude larger than currently available corpora. Our model demonstrates great capability in handling multi-domain dialogues, simultaneously outperforming existing state-of-the-art models in single-domain dialogue tracking tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ], "paper_content_text": [ "Introduction Spoken Dialogue Systems (SDS) are computer programs that can hold a conversation with a human.", "These can be task-based systems that help the user achieve specific goals, e.g.", "finding and booking hotels or restaurants.", "In order for the SDS to infer the user goals/intentions during the conversation, its Belief Tracking (BT) component maintains a distribution of states, called a belief state, across dialogue turns (Young et al., 2010) .", "The belief state is used by the system to take actions in each turn until the conversation is concluded and the user goal is achieved.", "In order to extract these belief states from the conversation, traditional approaches use a Spoken Language Understanding (SLU) unit that utilizes a semantic dictionary to hold all the key terms, rephrasings and alternative mentions of a belief state.", "The SLU then delexicalises each turn using this semantic dictionary, before it passes it to the BT component (Wang and Lemon, 2013; Henderson et al., 2014b; Williams, 2014; Zilka and Jurcicek, 2015; Perez and Liu, 2016; Rastogi et al., 2017) .", "However, this approach is not scalable to multi-domain dialogues because of the effort required to define a semantic dictionary for each domain.", "More advanced approaches, such as the Neural Belief Tracker (NBT), use word embeddings to alleviate the need for delexicalisation and combine the SLU and BT into one unit, mapping directly from turns to belief states .", "Nevertheless, the NBT model does not tackle the problem of mixing different domains in a conversation.", "Moreover, as each slot is trained independently without sharing information between different slots, scaling such approaches to large multi-domain systems is greatly hindered.", "In this paper, we propose a model that jointly identifies the domain and tracks the belief states corresponding to that domain.", "It uses semantic similarity between ontology terms and turn utterances to allow for parameter sharing between different slots across domains and within a single domain.", "In addition, the model parameters are independent of the ontology/belief states, thus the dimensionality of the parameters does not increase with the size of the ontology, making the model practically feasible to deploy in multidomain environments without any modifications.", "Finally, we introduce a new, large-scale corpora of natural, human-human conversations providing new possibilities to train complex, neural-based models.", "Our model systematically improves upon state-of-the-art neural approaches both in single and multi-domain conversations.", "Background The belief states of the BT are defined based on an ontology -the structured representation of the database which contains entities the system can talk about.", "The ontology defines the terms over which the distribution is to be tracked in the dialogue.", "This ontology is constructed in terms of slots and values in a single domain setting.", "Or, alternatively, in terms of domains, slots and values in a multi-domain 
environment.", "Each domain consists of multiple slots and each slot contains several values, e.g.", "domain=hotel, slot=price, value=expensive.", "In each turn, the BT fits a distribution over the values of each slot in each domain, and a none value is added to each slot to indicate if the slot is not mentioned so that the distribution sums up to 1.", "The BT then passes these states to the Policy Optimization unit as full probability distributions to take actions.", "This allows robustness to noisy environments (Young et al., 2010) .", "The larger the ontology, the more flexible and multi-purposed the system is, but the harder it is to train and maintain a good quality BT.", "Related Work In recent years, a plethora of research has been generated on belief tracking (Williams et al., 2016) .", "For the purposes of this paper, two previously proposed models are particularly relevant.", "Neural Belief Tracker (NBT) The main idea behind the NBT is to use semantically specialized pretrained word embeddings to encode the user utterance, the system act and the candidate slots and values taken from the ontology.", "These are fed to semantic decoding and context modeling modules that apply a three-way gating mechanism and pass the output to a non-linear classifier layer to produce a distribution over the values for each slot.", "It uses a simple update rule, p(s t ) = βp(s t−1 ) + λy, where p(s t ) is the belief state at time step t, y is the output of the binary decision maker of the NBT and β and λ are tunable parameters.", "The NBT leverages semantic information from the word embeddings to resolve lexical/morphological ambiguity and maximize the shared parameters across the values of each slot.", "However, it only applies to a single domain and does not share parameters across slots.", "Multi-domain Dialogue State Tracking Recently, Rastogi et al.", "(2017) proposed a multidomain approach using delexicalized utterances fed to a two layer stacked bi-directional GRU network to extract features from the user and the system utterances.", "These, combined with the candidate slots and values, are passed to a feed-forward neural network with a softmax in the last layer.", "The candidate set fed to the network consists of the selected candidates from the previous turn and candidates from the ontology to a limit K, which restricts the maximum size of the chosen set.", "Consequently, the model does not need an ad-hoc belief state update mechanism like in the NBT.", "The parameters of the GRU network are defined for the domain, whereas the parameters of the feed-forward network are defined per slot, allowing transfer learning across different domains.", "However, the model relies on delexicalization to extract the features, which limits the performance of the BT, as it does not scale to the rich variety of the language.", "Moreover, the number of parameters increases with the number of slots.", "Method The core idea is to leverage semantic similarities between the utterances and ontology terms to compute the belief state distribution.", "In this way, the model parameters only learn to model the interactions between turn utterances and ontology terms in the semantic space, rather than the mapping from utterances to states.", "Consequently, information is shared between both slots and across domains.", "Additionally, the number of parameters does not increase with the ontology size.", "Domain tracking is considered as a separate task but is learned jointly with the belief state tracking of the slots and 
values.", "The proposed model uses semantically specialized pre-trained word embeddings (Wieting et al., 2015) .", "To encode the user and system utterances, we employed 7 independent bi-directional LSTMs (Graves and Schmidhuber, 2005) .", "Three of them are used to encode the system utterance for domain, slot and value tracking respectively.", "Similarly, three Bi-LSTMs encode the user utterance while and the last one is used to track the user affirmation.", "A variant of the CNNs as a feature extractor, similar to the one used in the NBT-CNN is also employed.", "Other variants of the model use CNNs as feature extractors (Kim, 2014; Kalchbrenner et al., 2014) .", "Domain Tracking Figure 1 presents the system architecture with two bi-directional LSTM networks as information encoders running over the word embeddings of the user and system utterances.", "The last hidden states of the forward and backward layers are concatenated to produce h d usr , h d sys of size L for the user and system utterances respectively.", "In the second variant of the model, CNNs are used to produce these vectors (Kim, 2014; Kalchbrenner et al., 2014) .", "To detect the presence of the domain in the dialogue turn, element-wise multiplication is used as a similarity metric between the hidden states and the ontology embeddings of the domain: d k = h d k tanh(W d e d + b d ), where k ∈ {usr, sys}, e d is the embedding vector of the domain and W d ∈ R L×D transforms the domain word embeddings of dimension D to the hidden representation.", "The information about semantic similarity is held by d usr and d sys , which are fed to a non-linear layer to output a binary decision: P t (d) = σ(w d {d usr ⊕ d sys } + b d ), where w d ∈ R 2L and b d are learnable parameters that map the semantic similarity to a belief state probability P t (d) of a domain d at a turn t. 
Candidate Slots and Values Tracking Slots and values are tracked using a similar architecture as for domain tracking (Figure 1) .", "However, to correctly model the context of the systemuser dialogue at each turn, three different cases are considered when computing the similarity vectors: 1.", "Inform: The user is informing the system about his/her goal, e.g.", "'I am looking for a restaurant that serves Turkish food'.", "2.", "Request: The system is requesting information by asking the user about the value of a specific slot.", "If the system utterance is: 'When do you want the taxi to arrive?'", "and the user answers with '19:30'.", "3.", "Confirm: The system wants to confirm information about the value of a specific slot.", "If the system asked: 'Would you like free parking?", "', the user can either affirm positively or negatively.", "The model detects the user affirmation, using a separate bi-directional LSTM or CNN to output h a usr .", "The three cases are modelled as following: y s,v inf = w inf {s usr ⊕ v usr } + b inf , y s,v req = w req {s sys ⊕ v usr } + b req , y s,v af = w af {s sys ⊕ v sys ⊕ h a usr } + b af , where s k , v k for k ∈ {usr, sys} represent semantic similarity between the user and system utterances and the ontology slot and value terms respectively computed as shown in Figure 1 , and w and b are learnable parameters.", "The distribution over the values of slot s in domain d at turn t can be computed by summing the unscaled states, y inf , y req and y af for each value v in s, and applying a softmax to normalize the distribution: P t (s, v) = softmax(y s,v inf + y s,v req + y s,v af ).", "Belief State Update Since dialogue systems in the real-world operate in noisy environments, a robust BT should utilize the flow of the conversation to reduce the uncertainty in the belief state distribution.", "This can be achieved by passing the output of the decision maker, at each turn, as an input to an RNN that runs over the dialogue turns as shown in Figure 1 , which allows the gradients to be propagated across turns.", "This alleviates the problem of tuning hyper-parameters for rule-based updates.", "To avoid the vanishing gradient problem, three networks were tested: a simple RNN, an RNN with a memory cell (Henderson et al., 2014a ) and a LSTM.", "The RNN with a memory cell proved to give the best results.", "In addition to the fact that it reduces the vanishing gradient problem, this variant is less complex than an LSTM, which makes training easier.", "Furthermore, a variant of RNN used for domain tracking has all its weights of the form: W i = α i I, where α i is a distinct learnable parameter for hidden, memory and previous state layers and I is the identity matrix.", "Similarly, weights of the RNN used to track the slots and values is of the form: W j = γ j I + λ j (1 − I), where γ j and λ j are the learnable parameters.", "These two variants of RNN are a combination of Henderson et al.", "(2014a) and Mrkvsić and Vulić (2018) previous works.", "The output is P 1:T (d) and P 1:T (s, v), which represents the joint probability distribution of the domains and slots and values respectively over the complete dialogue.", "Combining these together produces the full belief state distribution of the dialogue: Training Criteria Domain tracking and slots and values tracking are trained disjointly.", "Belief state labels for each turn are split into domains and slots and values.", "Thanks to the disjoint training, the learning of slot and value belief states are not restricted 
to a specific domain.", "Therefore, the model shares the knowledge of slots and values across different domains.", "The loss function for the domain tracking is: L d = − N n=1 d∈D t n (d)logP n 1:T (d), where d is a vector of domains over the dialogue, t n (d) is the domain label for the dialogue n and N is the number of dialogues.", "Similarly, the loss function for the slots and values tracking is: L s,v = − N n=1 s,v∈S,V t n (s, v)logP n 1:T (s, v), where s and v are vectors of slots and values over the dialogue and t n (s, v) is the joint label vector for the dialogue n. Datasets and Baselines Neural approaches to statistical dialogue development, especially in a task-oriented paradigm, are greatly hindered by the lack of large scale datasets.", "That is why, following the Wizard-of-Oz (WOZ) approach (Kelley, 1984; , we ran text-based multi-domain corpus data collection scheme through Amazon MTurk.", "The main goal of the data collection was to acquire humanhuman conversations between a tourist visiting a city and a clerk from an information center.", "At the beginning of each dialogue the user (visitor) was given explicit instructions about the goal to fulfill, which often spanned multiple domains.", "The task of the system (wizard) is to assist a visitor having an access to databases over domains.", "The WOZ paradigm allowed us to obtain natural and semantically rich multi-topic dialogues spanning over multiple domains such as hotels, attractions, restaurants, booking trains or taxis.", "The dialogues cover from 1 up to 5 domains per dialogue greatly varying in length and complexity.", "Data Structure The data consists of 2480 single-domain dialogues and 7375 multi-domain dialogues usually spanning from 2 up to 5 domains.", "Some domains consists also of sub-domains like booking.", "The average sentence lengths are 11.63 and 15.01 for users Evaluation We also used the extended WOZ 2.0 dataset (Wen et al., 2017).", "2 WOZ2 dataset consists of 1200 single topic dialogues constrained to the restaurant domain.", "All the weights were initialised using normal distribution of zero mean and unit variance and biases were initialised to zero.", "ADAM optimizer (Kingma and Ba, 2014) (with 64 batch size) is used to train all the models for 600 epochs.", "Dropout (Srivastava et al., 2014) was used for regularisation (50% dropout rate on all the intermediate representations).", "For each of the two datasets we compare our proposed architecture (using either Bi-LSTM or CNN as encoders) to the NBT model 3 .", "This is because the dialogues in the new dataset are richer and more noisier, as a closer resemblance to real environment dialogues.", "Table 2 presents the results on multi-domain dialogues from the new dataset described in Section 5.", "To demonstrate the difficulty of the multidomain belief tracking problem, values of a theoretical baseline that samples the belief state uniformly at random are also presented.", "Our model gracefully handles such a difficult task.", "In most of the cases, CNNs demonstrate better performance than Bi-LSTMs.", "We hypothesize that this comes from the effectiveness of extracting local and position-invariant features, which are crucial for semantic similarities (Yin et al., 2017) .", "Results Conclusions In this paper, we proposed a new approach that tackles the issue of multi-domain belief tracking, such as model parameter scalability with the ontology size.", "Our model shows improved performance in single-domain tasks compared to the state-ofthe-art NBT 
method.", "By exploiting semantic similarities between dialogue utterances and ontology terms, the model alleviates the need for ontologydependent parameters and maximizes the amount of information shared between slots and across domains.", "In future, we intend to investigate introducing new domains and ontology terms without further training thus performing zero-shot learning." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "7" ], "paper_header_content": [ "Introduction", "Background", "Related Work", "Neural Belief Tracker (NBT)", "Multi-domain Dialogue State Tracking", "Method", "Domain Tracking", "Candidate Slots and Values Tracking", "Belief State Update", "Training Criteria", "Datasets and Baselines", "Data Structure", "Evaluation", "Conclusions" ] }
GEM-SciDuet-train-41#paper-1061#slide-3
Problem Formulation
1. What is in the dialogue ontology? 2. What does the system output refer to? 3. What does the user input refer to? 4. How do we track the dialogue context? 5. How do we handle many domains?
1. What is in the dialogue ontology? 2. What does the system output refer to? 3. What does the user input refer to? 4. How do we track the dialogue context? 5. How do we handle many domains?
[]
GEM-SciDuet-train-41#paper-1061#slide-5
GEM-SciDuet-train-41#paper-1061#slide-5
Belief State Update
Use a statistical belief update mechanism modelled by an RNN (sketched below)
Use a statistical belief update mechanism modelled by an RNN (sketched below)
[]
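The statistical belief update mechanism in this slide corresponds to the constrained recurrences of the paper's Belief State Update section, where the domain-tracking RNN uses weights of the form $W_i = \alpha_i I$ and the slot-value RNN uses $W_j = \gamma_j I + \lambda_j (1 - I)$. Below is a hedged sketch of building such weight matrices; the plain tanh recurrence stands in for the memory-cell variant the paper prefers, whose exact equations are not given in the text, and all scalar values are illustrative.

import numpy as np

size = 4                                 # toy number of tracked classes
I = np.eye(size)
h_prev = np.zeros(size)                  # state carried over from the previous turn
y_turn = np.array([0.1, 0.7, 0.1, 0.1])  # toy output of the turn-level decision maker

# Domain-tracking RNN: every weight matrix is alpha_i * I, with a distinct
# learnable scalar per layer (hidden, memory and previous-state layers).
W_hh, W_xh = 0.9 * I, 0.6 * I
h_dom = np.tanh(W_hh @ h_prev + W_xh @ y_turn)

# Slot-value RNN: weights of the form gamma_j * I + lambda_j * (1 - I).
def sv_weight(gamma, lam):
    return gamma * I + lam * (1.0 - I)

h_sv = np.tanh(sv_weight(0.8, 0.05) @ h_prev + sv_weight(0.7, 0.02) @ y_turn)
print(h_dom, h_sv)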
GEM-SciDuet-train-41#paper-1061#slide-6
1061
Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing
Robust dialogue belief tracking is a key component in maintaining good quality dialogue systems. The tasks that dialogue systems are trying to solve are becoming increasingly complex, requiring scalability to multi-domain, semantically rich dialogues. However, most current approaches have difficulty scaling up with domains because of the dependency of the model parameters on the dialogue ontology. In this paper, a novel approach is introduced that fully utilizes semantic similarity between dialogue utterances and the ontology terms, allowing the information to be shared across domains. The evaluation is performed on a recently collected multi-domain dialogues dataset, one order of magnitude larger than currently available corpora. Our model demonstrates great capability in handling multi-domain dialogues, simultaneously outperforming existing state-of-the-art models in singledomain dialogue tracking tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ], "paper_content_text": [ "Introduction Spoken Dialogue Systems (SDS) are computer programs that can hold a conversation with a human.", "These can be task-based systems that help the user achieve specific goals, e.g.", "finding and booking hotels or restaurants.", "In order for the SDS to infer the user goals/intentions during the conversation, its Belief Tracking (BT) component maintains a distribution of states, called a belief state, across dialogue turns (Young et al., 2010) .", "The belief state is used by the system to take actions in each turn until the conversation is concluded and the user goal is achieved.", "In order to extract these belief states from the conversation, traditional approaches use a Spoken Language Understanding (SLU) unit that utilizes a semantic dictionary to hold all the key terms, rephrasings and alternative mentions of a belief state.", "The SLU then delexicalises each turn using this semantic dictionary, before it passes it to the BT component (Wang and Lemon, 2013; Henderson et al., 2014b; Williams, 2014; Zilka and Jurcicek, 2015; Perez and Liu, 2016; Rastogi et al., 2017) .", "However, this approach is not scalable to multi-domain dialogues because of the effort required to define a semantic dictionary for each domain.", "More advanced approaches, such as the Neural Belief Tracker (NBT), use word embeddings to alleviate the need for delexicalisation and combine the SLU and BT into one unit, mapping directly from turns to belief states .", "Nevertheless, the NBT model does not tackle the problem of mixing different domains in a conversation.", "Moreover, as each slot is trained independently without sharing information between different slots, scaling such approaches to large multi-domain systems is greatly hindered.", "In this paper, we propose a model that jointly identifies the domain and tracks the belief states corresponding to that domain.", "It uses semantic similarity between ontology terms and turn utterances to allow for parameter sharing between different slots across domains and within a single domain.", "In addition, the model parameters are independent of the ontology/belief states, thus the dimensionality of the parameters does not increase with the size of the ontology, making the model practically feasible to deploy in multidomain environments without any modifications.", "Finally, we introduce a new, large-scale corpora of natural, human-human conversations providing new possibilities to train complex, neural-based models.", "Our model systematically improves upon state-of-the-art neural approaches both in single and multi-domain conversations.", "Background The belief states of the BT are defined based on an ontology -the structured representation of the database which contains entities the system can talk about.", "The ontology defines the terms over which the distribution is to be tracked in the dialogue.", "This ontology is constructed in terms of slots and values in a single domain setting.", "Or, alternatively, in terms of domains, slots and values in a multi-domain 
environment.", "Each domain consists of multiple slots and each slot contains several values, e.g.", "domain=hotel, slot=price, value=expensive.", "In each turn, the BT fits a distribution over the values of each slot in each domain, and a none value is added to each slot to indicate if the slot is not mentioned so that the distribution sums up to 1.", "The BT then passes these states to the Policy Optimization unit as full probability distributions to take actions.", "This allows robustness to noisy environments (Young et al., 2010) .", "The larger the ontology, the more flexible and multi-purposed the system is, but the harder it is to train and maintain a good quality BT.", "Related Work In recent years, a plethora of research has been generated on belief tracking (Williams et al., 2016) .", "For the purposes of this paper, two previously proposed models are particularly relevant.", "Neural Belief Tracker (NBT) The main idea behind the NBT is to use semantically specialized pretrained word embeddings to encode the user utterance, the system act and the candidate slots and values taken from the ontology.", "These are fed to semantic decoding and context modeling modules that apply a three-way gating mechanism and pass the output to a non-linear classifier layer to produce a distribution over the values for each slot.", "It uses a simple update rule, p(s t ) = βp(s t−1 ) + λy, where p(s t ) is the belief state at time step t, y is the output of the binary decision maker of the NBT and β and λ are tunable parameters.", "The NBT leverages semantic information from the word embeddings to resolve lexical/morphological ambiguity and maximize the shared parameters across the values of each slot.", "However, it only applies to a single domain and does not share parameters across slots.", "Multi-domain Dialogue State Tracking Recently, Rastogi et al.", "(2017) proposed a multidomain approach using delexicalized utterances fed to a two layer stacked bi-directional GRU network to extract features from the user and the system utterances.", "These, combined with the candidate slots and values, are passed to a feed-forward neural network with a softmax in the last layer.", "The candidate set fed to the network consists of the selected candidates from the previous turn and candidates from the ontology to a limit K, which restricts the maximum size of the chosen set.", "Consequently, the model does not need an ad-hoc belief state update mechanism like in the NBT.", "The parameters of the GRU network are defined for the domain, whereas the parameters of the feed-forward network are defined per slot, allowing transfer learning across different domains.", "However, the model relies on delexicalization to extract the features, which limits the performance of the BT, as it does not scale to the rich variety of the language.", "Moreover, the number of parameters increases with the number of slots.", "Method The core idea is to leverage semantic similarities between the utterances and ontology terms to compute the belief state distribution.", "In this way, the model parameters only learn to model the interactions between turn utterances and ontology terms in the semantic space, rather than the mapping from utterances to states.", "Consequently, information is shared between both slots and across domains.", "Additionally, the number of parameters does not increase with the ontology size.", "Domain tracking is considered as a separate task but is learned jointly with the belief state tracking of the slots and 
values.", "The proposed model uses semantically specialized pre-trained word embeddings (Wieting et al., 2015) .", "To encode the user and system utterances, we employed 7 independent bi-directional LSTMs (Graves and Schmidhuber, 2005) .", "Three of them are used to encode the system utterance for domain, slot and value tracking respectively.", "Similarly, three Bi-LSTMs encode the user utterance while and the last one is used to track the user affirmation.", "A variant of the CNNs as a feature extractor, similar to the one used in the NBT-CNN is also employed.", "Other variants of the model use CNNs as feature extractors (Kim, 2014; Kalchbrenner et al., 2014) .", "Domain Tracking Figure 1 presents the system architecture with two bi-directional LSTM networks as information encoders running over the word embeddings of the user and system utterances.", "The last hidden states of the forward and backward layers are concatenated to produce h d usr , h d sys of size L for the user and system utterances respectively.", "In the second variant of the model, CNNs are used to produce these vectors (Kim, 2014; Kalchbrenner et al., 2014) .", "To detect the presence of the domain in the dialogue turn, element-wise multiplication is used as a similarity metric between the hidden states and the ontology embeddings of the domain: d k = h d k tanh(W d e d + b d ), where k ∈ {usr, sys}, e d is the embedding vector of the domain and W d ∈ R L×D transforms the domain word embeddings of dimension D to the hidden representation.", "The information about semantic similarity is held by d usr and d sys , which are fed to a non-linear layer to output a binary decision: P t (d) = σ(w d {d usr ⊕ d sys } + b d ), where w d ∈ R 2L and b d are learnable parameters that map the semantic similarity to a belief state probability P t (d) of a domain d at a turn t. 
Candidate Slots and Values Tracking Slots and values are tracked using a similar architecture as for domain tracking (Figure 1) .", "However, to correctly model the context of the systemuser dialogue at each turn, three different cases are considered when computing the similarity vectors: 1.", "Inform: The user is informing the system about his/her goal, e.g.", "'I am looking for a restaurant that serves Turkish food'.", "2.", "Request: The system is requesting information by asking the user about the value of a specific slot.", "If the system utterance is: 'When do you want the taxi to arrive?'", "and the user answers with '19:30'.", "3.", "Confirm: The system wants to confirm information about the value of a specific slot.", "If the system asked: 'Would you like free parking?", "', the user can either affirm positively or negatively.", "The model detects the user affirmation, using a separate bi-directional LSTM or CNN to output h a usr .", "The three cases are modelled as following: y s,v inf = w inf {s usr ⊕ v usr } + b inf , y s,v req = w req {s sys ⊕ v usr } + b req , y s,v af = w af {s sys ⊕ v sys ⊕ h a usr } + b af , where s k , v k for k ∈ {usr, sys} represent semantic similarity between the user and system utterances and the ontology slot and value terms respectively computed as shown in Figure 1 , and w and b are learnable parameters.", "The distribution over the values of slot s in domain d at turn t can be computed by summing the unscaled states, y inf , y req and y af for each value v in s, and applying a softmax to normalize the distribution: P t (s, v) = softmax(y s,v inf + y s,v req + y s,v af ).", "Belief State Update Since dialogue systems in the real-world operate in noisy environments, a robust BT should utilize the flow of the conversation to reduce the uncertainty in the belief state distribution.", "This can be achieved by passing the output of the decision maker, at each turn, as an input to an RNN that runs over the dialogue turns as shown in Figure 1 , which allows the gradients to be propagated across turns.", "This alleviates the problem of tuning hyper-parameters for rule-based updates.", "To avoid the vanishing gradient problem, three networks were tested: a simple RNN, an RNN with a memory cell (Henderson et al., 2014a ) and a LSTM.", "The RNN with a memory cell proved to give the best results.", "In addition to the fact that it reduces the vanishing gradient problem, this variant is less complex than an LSTM, which makes training easier.", "Furthermore, a variant of RNN used for domain tracking has all its weights of the form: W i = α i I, where α i is a distinct learnable parameter for hidden, memory and previous state layers and I is the identity matrix.", "Similarly, weights of the RNN used to track the slots and values is of the form: W j = γ j I + λ j (1 − I), where γ j and λ j are the learnable parameters.", "These two variants of RNN are a combination of Henderson et al.", "(2014a) and Mrkvsić and Vulić (2018) previous works.", "The output is P 1:T (d) and P 1:T (s, v), which represents the joint probability distribution of the domains and slots and values respectively over the complete dialogue.", "Combining these together produces the full belief state distribution of the dialogue: Training Criteria Domain tracking and slots and values tracking are trained disjointly.", "Belief state labels for each turn are split into domains and slots and values.", "Thanks to the disjoint training, the learning of slot and value belief states are not restricted 
Belief State Update Since dialogue systems in the real world operate in noisy environments, a robust BT should utilize the flow of the conversation to reduce the uncertainty in the belief state distribution.", "This can be achieved by passing the output of the decision maker, at each turn, as an input to an RNN that runs over the dialogue turns as shown in Figure 1, which allows the gradients to be propagated across turns.", "This alleviates the problem of tuning hyper-parameters for rule-based updates.", "To avoid the vanishing gradient problem, three networks were tested: a simple RNN, an RNN with a memory cell (Henderson et al., 2014a) and an LSTM.", "The RNN with a memory cell proved to give the best results.", "In addition to the fact that it reduces the vanishing gradient problem, this variant is less complex than an LSTM, which makes training easier.", "Furthermore, the variant of the RNN used for domain tracking has all its weights of the form $W_i = \alpha_i I$, where $\alpha_i$ is a distinct learnable parameter for the hidden, memory and previous state layers and $I$ is the identity matrix.", "Similarly, the weights of the RNN used to track the slots and values are of the form $W_j = \gamma_j I + \lambda_j (1 - I)$, where $\gamma_j$ and $\lambda_j$ are the learnable parameters.", "These two variants of the RNN are a combination of the previous works of Henderson et al. (2014a) and Mrkšić and Vulić (2018).", "The output is $P_{1:T}(d)$ and $P_{1:T}(s, v)$, which represent the joint probability distributions of the domains and of the slots and values respectively over the complete dialogue.", "Combining these together produces the full belief state distribution of the dialogue.", "Training Criteria Domain tracking and slots and values tracking are trained disjointly.", "Belief state labels for each turn are split into domains and slots and values.", "Thanks to the disjoint training, the learning of slot and value belief states is not restricted to a specific domain.", "Therefore, the model shares the knowledge of slots and values across different domains.", "The loss function for the domain tracking is $\mathcal{L}_d = -\sum_{n=1}^{N} \sum_{d \in D} t^n(d) \log P^n_{1:T}(d)$, where $d$ is a vector of domains over the dialogue, $t^n(d)$ is the domain label for the dialogue $n$ and $N$ is the number of dialogues.", "Similarly, the loss function for the slots and values tracking is $\mathcal{L}_{s,v} = -\sum_{n=1}^{N} \sum_{s,v \in S,V} t^n(s, v) \log P^n_{1:T}(s, v)$, where $s$ and $v$ are vectors of slots and values over the dialogue and $t^n(s, v)$ is the joint label vector for the dialogue $n$.
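Both losses share the same negative log-likelihood form; a hypothetical sketch with illustrative shapes and values:

```python
import numpy as np

def nll(P, T, eps=1e-12):
    """-sum_n sum_k T[n, k] * log P[n, k]; covers both L_d and L_{s,v}.

    P: predicted dialogue-level probabilities, T: 0/1 labels, both of
    illustrative shape [num_dialogues, num_classes] (domains, or joint
    slot-value pairs)."""
    return -np.sum(T * np.log(P + eps))

P_d = np.array([[0.9, 0.1],     # toy P^n_{1:T}(d) for two dialogues
                [0.3, 0.7]])
T_d = np.array([[1, 0],         # gold domain labels t^n(d)
                [0, 1]])
print(round(nll(P_d, T_d), 3))  # -(ln 0.9 + ln 0.7) is roughly 0.462
```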
method.", "By exploiting semantic similarities between dialogue utterances and ontology terms, the model alleviates the need for ontologydependent parameters and maximizes the amount of information shared between slots and across domains.", "In future, we intend to investigate introducing new domains and ontology terms without further training thus performing zero-shot learning." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1", "5.2", "7" ], "paper_header_content": [ "Introduction", "Background", "Related Work", "Neural Belief Tracker (NBT)", "Multi-domain Dialogue State Tracking", "Method", "Domain Tracking", "Candidate Slots and Values Tracking", "Belief State Update", "Training Criteria", "Datasets and Baselines", "Data Structure", "Evaluation", "Conclusions" ] }
GEM-SciDuet-train-41#paper-1061#slide-6
Datasets
Wizard-of-Oz framework for collecting data for belief tracking. Amazon MTurk users given tasks to complete, access to the database. They produce dialogues and annotate them. Single-domain dataset: WOZ 2.0 (Wen et al., 2016). New multi-domain dataset: MultiWOZ
Wizard-of-Oz framework for collecting data for belief tracking. Amazon MTurk users given tasks to complete, access to the database. They produce dialogues and annotate them. Single-domain dataset: WOZ 2.0 (Wen et al., 2016). New multi-domain dataset: MultiWOZ
[]
GEM-SciDuet-train-41#paper-1061#slide-7
1061
Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing
GEM-SciDuet-train-41#paper-1061#slide-7
Results
1. Single-domain Dialogues:
1. Single-domain Dialogues:
[]
GEM-SciDuet-train-41#paper-1061#slide-8
1061
Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing
GEM-SciDuet-train-41#paper-1061#slide-8
Conclusion
1. We proposed a model with ontology-independent parameters 2. It also achieves state-of-the-art results in single-domain 3. The model also demonstrates great capability in handling mixed-domain dialogues Future work is to test the model on out-of-domain tracking The data collection was funded through a Google Faculty Award
1. We proposed a model with ontology-independent parameters 2. It also achieves state-of-the-art results in single-domain 3. The model also demonstrates great capability in handling mixed-domain dialogues Future work is to test the model on out-of-domain tracking The data collection was funded through a Google Faculty Award
[]
GEM-SciDuet-train-42#paper-1062#slide-0
1062
Neural Hidden Markov Model for Machine Translation
This work aims to investigate alternative neural machine translation (NMT) approaches and thus proposes a neural hidden Markov model (HMM) consisting of neural network-based alignment and lexicon models. The neural models make use of encoder and decoder components, but drop the attention component. The training is end-to-end and the standalone decoder is able to provide comparable performance with the state-of-the-art attention-based models on three different translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Attention-based neural translation models (Bahdanau et al., 2015; Luong et al., 2015) attend to specific positions on the source side to generate translation.", "Using the attention component provides significant improvements over the pure encoder-decoder sequence-to-sequence approach (Sutskever et al., 2014) that uses no such attention mechanism.", "In this work, we aim to compare the performance of attention-based models to another baseline, namely, neural hidden Markov models.", "The neural HMM has been successfully applied in the literature on top of conventional phrasebased systems (Wang et al., 2017) .", "In this work, our purpose is to explore its application in standalone decoding, i.e.", "the model is used to generate and score candidates without assistance from a phrase-based system.", "Because translation is done standalone using only neural models, we still refer to this as NMT.", "In addition, while Wang et al.", "(2017) applied feedforward networks to model alignment and translation, the recurrent structures proposed in this work surpass the feedforward variants by up to 1.3% in BLEU.", "By comparing neural HMM and attention-based NMT, we shed light on the role of the attention component.", "To this end, we use an alignmentbased model that has a recurrent bidirectional encoder and a recurrent decoder, but use no attention component.", "We replace the attention mechanism by a first-order HMM alignment model.", "Attention levels are deterministic normalized similarity scores part of the architecture design of an otherwise fully supervised classifier.", "HMM-style alignments on the other hand are discrete random variables and (unlike attention levels) must be marginalized.", "Once alignments are marginalized, which is tractable for a first-order HMM, parameters can be estimated to attain a local optimum of log-likelihood of observations as usual.", "Motivation In attention-based approaches, the alignment distribution is used to select the positions in the source sentence that the decoder attends to during translation.", "Thus the alignment model can be considered as an implicit part of the translation model.", "On the other hand, separating the alignment model from the lexicon model has its own advantages: First of all, this leads to more flexibility in modeling and training: The models can not only be trained separately, but they can also have different model types, such as neural models, count-based models, etc.", "Second, the separation avoids propagating errors from one model to another.", "In attention-based systems, the translation score is based on the alignment distribution, in which errors can be propagated from the alignment part to the translation part.", "Third, probabilistic treatment to alignments in NMT typically implies an extended degree of interpretability (e.g.", "one can inspect posteriors) and control over the model (e.g.", "one can impose priors over alignments and lexical distributions).", "Neural Hidden Markov Model Given a source sentence f J 1 = f 1 ...f j ...f J and a target 
"Neural Hidden Markov Model Given a source sentence $f_1^J = f_1 \ldots f_j \ldots f_J$ and a target sentence $e_1^I = e_1 \ldots e_i \ldots e_I$, where $j = b_i$ is the source position aligned to the target position $i$, we model translation using an alignment model and a lexicon model: $p(e_1^I | f_1^J) = \sum_{b_1^I} p(e_1^I, b_1^I | f_1^J)$ (1) $:= \sum_{b_1^I} \prod_{i=1}^{I} \underbrace{p(e_i | b_1^i, e_0^{i-1}, f_1^J)}_{\text{lexicon model}} \cdot \underbrace{p(b_i | b_1^{i-1}, e_0^{i-1}, f_1^J)}_{\text{alignment model}}$ (2)", "Instead of predicting the absolute source position $b_i$, we use an alignment model $p(\Delta_i | b_1^{i-1}, e_0^{i-1}, f_1^J)$ that predicts the jump $\Delta_i = b_i - b_{i-1}$.", "Wang et al.", "(2017) applied feedforward neural networks for modeling the lexicon and alignment probabilities.", "In this work, we would like to model these distributions using recurrent neural networks (RNNs).", "RNNs have been shown to outperform feedforward variants in language and translation modeling.", "This is mainly because RNNs can handle arbitrary input lengths and can thus include unbounded context information.", "Unfortunately, the recurrent hidden layer cannot be easily applied in the neural hidden Markov model, since it would significantly complicate the computation of the forward-backward messages when running Baum-Welch.", "Nevertheless, we can apply the long short-term memory (LSTM) structure (Hochreiter and Schmidhuber, 1997) for the source and target word embeddings.", "With this technique we can take the essence of LSTM RNNs without breaking any sequential generative model assumptions.", "Our models are close in structure to the model proposed in Luong et al. (2015), where we have a component that encodes the source sentence, and another that encodes the target sentence.", "As shown in Figure 1, we use a source-side bidirectional LSTM embedding $h_j = \overrightarrow{h}_j + \overleftarrow{h}_j$, where $\overrightarrow{h}_j = \mathrm{LSTM}(W, f_j, \overrightarrow{h}_{j-1})$ and $\overleftarrow{h}_j = \mathrm{LSTM}(V, f_j, \overleftarrow{h}_{j+1})$, as well as a target-side LSTM embedding $s_{i-1} = \mathrm{LSTM}(U, e_{i-1}, s_{i-2})$.", "$h_j$, $\overrightarrow{h}_j$, $\overleftarrow{h}_j$ and $s_{i-1}$, $s_{i-2}$ are vectors; $W$, $V$ and $U$ are weight matrices.", "Before the non-linear hidden layers, there is a projection layer which concatenates $h_j$, $s_{i-1}$ and $e_{i-1}$.", "Then the neural network-based lexicon model is given by $p(e_i | b_1^i, e_0^{i-1}, f_1^J) := p(e_i | h_j, s_{i-1}, e_{i-1})$ (3)", "and the neural network-based alignment model by $p(b_i | b_1^{i-1}, e_0^{i-1}, f_1^J) := p(\Delta_i | h_j, s_{i-1}, e_{i-1})$ (4), where $j = b_{i-1}$.", "The training criterion is the logarithm of the sentence posterior probabilities over the training sentence pairs $(F_r, E_r)$, $r = 1, \ldots, R$: $\arg\max_\theta \sum_r \log p_\theta(E_r | F_r)$ (5)", "The derivative for a single sentence pair $(F, E) = (f_1^J, e_1^I)$ is: $\frac{\partial}{\partial \theta} \log p_\theta(E | F) = \sum_{j', j} \sum_i p_i(j', j | f_1^J, e_1^I; \theta) \cdot \frac{\partial}{\partial \theta} \log p(j, e_i | j', e_0^{i-1}, f_1^J; \theta)$ (6) with HMM posterior weights $p_i(j', j | f_1^J, e_1^I; \theta)$, which can be computed using the forward-backward algorithm.", "The entire training procedure can be summarized as backpropagation in an EM framework: 1. compute the posterior HMM weights and the local gradients (backpropagation); 2. update the neural network weights.",
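For intuition, here is a toy NumPy sketch of the forward-backward pass that yields these HMM posterior weights; the lexicon and jump scores are random stand-ins for the network outputs, and all names and sizes are assumptions rather than the authors' TensorFlow code.

```python
import numpy as np

rng = np.random.default_rng(2)
I, J = 4, 5                                    # target / source lengths

# Stand-ins for the neural model outputs:
# lex[i, j] ~ p(e_i | h_j, s_{i-1}, e_{i-1}); trans[jp, j] ~ p(j | j').
lex = rng.random((I, J))
trans = rng.random((J, J))
trans /= trans.sum(axis=1, keepdims=True)
init = np.full(J, 1.0 / J)                     # initial alignment distribution

# Forward pass (0-based i): alpha[i, j] = p(e_1..e_{i+1}, b_{i+1} = j)
alpha = np.zeros((I, J))
alpha[0] = init * lex[0]
for i in range(1, I):
    alpha[i] = lex[i] * (alpha[i - 1] @ trans)

# Backward pass: beta[i, j] = p(e_{i+2}..e_I | b_{i+1} = j)
beta = np.ones((I, J))
for i in range(I - 2, -1, -1):
    beta[i] = trans @ (lex[i + 1] * beta[i + 1])

likelihood = alpha[-1].sum()                   # p(e_1^I | f_1^J), eq. (1)
unary = alpha * beta / likelihood              # per-position posteriors
# The pairwise weights p_i(j', j) of eq. (6) follow as
# alpha[i-1, j'] * trans[j', j] * lex[i, j] * beta[i, j] / likelihood.
print(likelihood, unary.sum(axis=1))           # each row sums to 1
```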
"Decoding In the decoding stage we still calculate the sum over alignments and apply a target-synchronous beam search for the target string.", "The auxiliary quantity for each unknown partial string $e_0^i$ is specified as $Q(i, j; e_0^i)$.", "During search, the partial hypothesis is extended from $e_0^{i-1}$ to $e_0^i$: $Q(i, j; e_0^i) = \sum_{j'} p(j, e_i | j', e_0^{i-1}, f_1^J) \cdot Q(i-1, j'; e_0^{i-1})$ (7)", "The decoder is shown in Algorithm 1.", "In the innermost loop (lines 11-13), alignments are hypothesized and used to calculate the auxiliary quantity $Q(i, j; e_0^i)$.", "Then for each source position $j$, the lexical distribution over the full target vocabulary is computed (line 14).", "The distributions are accumulated ($Q(i; e_0^i) = \sum_j Q(i, j; e_0^i)$, line 16), then sorted (line 18), and the best candidate translations ($\arg\max_{e_i} Q(i; e_0^i)$) lying within the beam are used to expand the partial hypotheses (lines 19-23).", "cache is a two-dimensional list of size $J \times |V_{src}|$ (source vocabulary size), which is used to cache the current quantities.", "Whenever a partial hypothesis in the beam ends with the sentence end symbol (<EOF>), the counter is increased by 1 (lines 26-28).", "The translation is terminated if the counter reaches the beam size or the hypothesis sentence length reaches three times the source sentence length (line 6).", "If a hypothesis stops but its score is worse than those of other hypotheses, it is eliminated from the beam, but it still contests non-terminated hypotheses.", "During the comparison the scores are normalized by the hypothesis sentence length.", "Note that we have no explicit coverage constraints.", "This means that a source position can be revisited many times, thereby creating one-to-many alignment cases.", "This also allows unaligned source words.", "In the neural HMM decoder, word alignments are estimated and scored according to the distribution calculated by the neural network alignment model, leading alignment decisions to become part of the beam search.", "The search space consists of both alignment and translation decisions.", "In contrast, the search space in attention-based decoding consists only of translation decisions.", "The decoding complexity is $O(J^2 \cdot I)$ ($J$ = source sentence length, $I$ = target sentence length) compared to $O(J \cdot I)$ for attention-based models.", "These are theoretical complexities of decoding on a CPU, only considering the source and target sentence lengths.", "In practice, the size of the neural network must also be taken into account, and there are optimized matrix multiplications for decoding on a GPU.", "In general, the decoding speed of our model is about 3 times slower than that of a standard attention model (1.07 sentences per second vs. 3.00 sentences per second) on a single GPU.",
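A toy sketch of one target-synchronous expansion step of this search (eq. 7); the neural lexicon and jump scores are random stand-ins, and the names, sizes and the beam size of 3 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
J, V = 5, 8                                 # source length, toy vocabulary

Q_prev = rng.random(J)                      # Q(i-1, j'; e_0^{i-1})
Q_prev /= Q_prev.sum()
trans = rng.random((J, J))                  # p(j | j'), jump-model stand-in
trans /= trans.sum(axis=1, keepdims=True)
lex = rng.random((J, V))                    # p(e | j, ...), lexicon stand-in
lex /= lex.sum(axis=1, keepdims=True)

# Eq. (7): Q(i, j) = sum_{j'} p(j | j') * p(e | j) * Q(i-1, j')
reach = Q_prev @ trans                      # alignment mass arriving at j
Q_i = reach[:, None] * lex                  # shape [J, V]

# Accumulate over source positions, then keep the best words in the beam.
scores = Q_i.sum(axis=0)                    # Q(i; e_0^{i-1} e) for each word e
beam = np.argsort(scores)[::-1][:3]         # top-3 extensions of the hypothesis
print(beam, scores[beam])
```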
"This is still an initial decoder and we did not spend much time on accelerating its decoding yet.", "The optimization of our decoder would be a promising future work.", "Experiments The experiments are conducted on the WMT 2017 German↔English and Chinese→English translation tasks, which consist of 5M and 23M parallel sentence pairs respectively.", "Translation quality is measured with the case-sensitive BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) metrics on newstests 2017, which contain 3004 (German↔English) and 2001 (Chinese→English) sentence pairs.", "For German and English preprocessing, we use the Moses tokenizer with hyphen splitting, and perform truecasing with Moses scripts (Koehn et al., 2007).", "For German↔English subword segmentation, we use 20K joint BPE operations.", "For the Chinese data, we segment it using the Jieba segmenter (https://github.com/fxsjy/jieba).", "We then learn a BPE model on the segmented Chinese, also using 20K merge operations.", "During training, sentences with a length greater than 50 subwords are filtered out.", "Attention-Based System The attention-based systems are trained with Sockeye (Hieber et al., 2017), which implements an attentional encoder-decoder with small modifications to the model in Bahdanau et al. (2015).", "The encoder and decoder word embeddings are of size 620.", "The encoder consists of a bidirectional layer with 1000 LSTMs with peephole connections to encode the source side.", "We use Adam (Kingma and Ba, 2015) as the optimizer with a learning rate of 0.001 and a batch size of 50.", "The network is trained with 30% dropout for up to 500K iterations and evaluated every 10K iterations on the development set with BLEU.", "Decoding is done using beam search with a beam size of 12.", "In the end the four best models are averaged as described in the beginning of Junczys-Dowmunt et al. (2016).", "Neural Hidden Markov Model The entire neural hidden Markov model is implemented in TensorFlow (Abadi et al., 2016).", "The feedforward models have three hidden layers of sizes 1000, 1000 and 500 respectively, with a 5-word source window and a 3-gram target history.", "200 nodes are used for the word embeddings.", "The output layer of the neural lexicon model consists of around 25K nodes for all subword units, while the neural alignment model has a small output layer with 201 nodes, which reflects that the aligned position can jump within the scope from −100 to 100.", "Apart from the basic projection layer, we also applied LSTM layers for the source and target word embeddings.", "The embedding layers have 350 nodes and the size of the projection layer is 800 (400 + 200 + 200, Figure 1).", "We use Adam as the optimizer with a learning rate of 0.001.", "The neural lexicon and alignment models are trained with 30% dropout and the norm of the gradient is clipped with a threshold of 1 (Pascanu et al., 2014).", "In decoding we use a beam size of 12, and the element-wise average of all weights of the four best models also results in better performance.",
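For reference, the stated neural-HMM hyperparameters collected into one hypothetical config dict; the key names are our own shorthand, not identifiers from the authors' implementation.

```python
# Hypothetical summary of the hyperparameters listed above; key names are
# illustrative, not taken from the authors' TensorFlow implementation.
neural_hmm_config = {
    "ffnn_hidden_sizes": [1000, 1000, 500],
    "source_window": 5,                 # words
    "target_history": 3,                # 3-gram target history
    "ffnn_word_embedding_dim": 200,
    "lexicon_output_units": 25_000,     # approx., all subword units
    "alignment_output_units": 201,      # jumps in [-100, 100]
    "lstm_embedding_dim": 350,
    "projection_dim": 800,              # 400 + 200 + 200 (Figure 1)
    "optimizer": "adam",
    "learning_rate": 1e-3,
    "dropout": 0.3,
    "grad_clip_norm": 1.0,
    "beam_size": 12,
}
```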
"Results We compare the neural HMM approach (Subsection 5.2) with the state-of-the-art attention-based approach (Subsection 5.1) on different translation tasks.", "The results are presented in Table 1.", "Compared to the model presented in Wang et al. (2017), switching to LSTM models has a clear advantage, improving the FFNN-based system by up to 1.3% BLEU and 1.8% TER.", "It seems that the HMM model benefits from richer features, such as LSTM states, which are very similar to what an attention mechanism would require.", "We actually expected it to do with less, the reason being that alignment distributions get refined a posteriori and so they do not have to be as strong a priori.", "We can also observe that the performance of our approach is comparable with the state-of-the-art attention-based system with 25M more parameters on all three tasks.", "Alignment Analysis We show an example from the German→English newstest 2017 in Figure 2, along with the attention and alignment matrices.", "We can observe that the neural network-based HMM generates a clearer alignment path compared to the attention weights.", "In this example, it can exactly estimate the alignment positions for the words wanted and of.", "Discussion We described a novel formulation for a neural network-based machine translation system, which applies neural networks to the conventional hidden Markov model.", "The training is end-to-end, the model is monolithic, and it can be used as a standalone decoder.", "This results in a more modern and efficient way to use HMMs in machine translation and enables neural networks to benefit from HMMs.", "Experiments show that replacing attention with alignment does not improve the translation performance of NMT significantly.", "One possible reason is that alignment may fail to capture relevant contexts the way attention does.", "While alignment aims to identify translation equivalents between two languages, attention is designed to find relevant context for predicting the next target word.", "Source words with high attention weights are not necessarily translation equivalents of the target word.", "Although using alignment does not lead to significant improvements in terms of BLEU over attention, we think alignment-based NMT models are still useful for automatic post-editing and for developing coverage-based models.", "These might be interesting future directions to explore." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Motivation", "Neural Hidden Markov Model", "Decoding", "Experiments", "Attention-Based System", "Neural Hidden Markov Model", "Results", "Alignment Analysis", "Discussion" ] }
GEM-SciDuet-train-42#paper-1062#slide-0
Introduction
• Attention-based neural translation models attend to specific positions on the source side to generate translation; improvements over the pure encoder-decoder sequence-to-sequence approach • Neural HMM has been successfully applied on top of SMT systems • This work explores its application in standalone decoding: end-to-end, only with neural networks (NMT) • LSTM structures outperform FFNN variants in [Wang & Alkhouli+ 17]
• Attention-based neural translation models attend to specific positions on the source side to generate translation; improvements over the pure encoder-decoder sequence-to-sequence approach • Neural HMM has been successfully applied on top of SMT systems • This work explores its application in standalone decoding: end-to-end, only with neural networks (NMT) • LSTM structures outperform FFNN variants in [Wang & Alkhouli+ 17]
[]
GEM-SciDuet-train-42#paper-1062#slide-1
1062
GEM-SciDuet-train-42#paper-1062#slide-1
Neural Hidden Markov Model
• Alignment: i → j = b_i • Model translation using an alignment model and a lexicon model: p(e_1^I | f_1^J) = Σ_{b_1^I} Π_{i=1}^{I} p(e_i | b_1^i, e_0^{i−1}, f_1^J) [lexicon model] · p(b_i | b_1^{i−1}, e_0^{i−1}, f_1^J) [alignment model]; the alignment model predicts the jump Δ_i = b_i − b_{i−1} • Neural network-based lexicon model: p(e_i | h_j, s_{i−1}, e_{i−1}) • Neural network-based alignment model: p(Δ_i | h_j, s_{i−1}, e_{i−1}) (j = b_{i−1})
• Alignment: i → j = b_i • Model translation using an alignment model and a lexicon model: p(e_1^I | f_1^J) = Σ_{b_1^I} Π_{i=1}^{I} p(e_i | b_1^i, e_0^{i−1}, f_1^J) [lexicon model] · p(b_i | b_1^{i−1}, e_0^{i−1}, f_1^J) [alignment model]; the alignment model predicts the jump Δ_i = b_i − b_{i−1} • Neural network-based lexicon model: p(e_i | h_j, s_{i−1}, e_{i−1}) • Neural network-based alignment model: p(Δ_i | h_j, s_{i−1}, e_{i−1}) (j = b_{i−1})
[]
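The slide record above factorizes translation into a lexicon model and an alignment model and sums over all alignment paths b_1^I. A toy numpy sketch of that marginalization via the forward recursion; the lexicon and jump tables are random stand-ins for the outputs of the neural models, and the uniform initial alignment is an assumption:

```python
import numpy as np

def sequence_likelihood(lex, jump, J, I):
    """Forward recursion: alpha[j] accumulates all alignment paths ending at j.

    lex[i, j]       ~ p(e_i | source position j)      (lexicon model)
    jump[j_prev, j] ~ p(b_i = j | b_{i-1} = j_prev)   (alignment model)
    """
    alpha = lex[0] / J                    # uniform initial alignment assumed
    for i in range(1, I):
        alpha = lex[i] * (alpha @ jump)   # sum over previous positions j'
    return alpha.sum()                    # p(e_1^I | f_1^J)

rng = np.random.default_rng(0)
J, I = 6, 5
lex = rng.random((I, J))
jump = rng.random((J, J))
jump /= jump.sum(axis=1, keepdims=True)   # rows are proper distributions
print(sequence_likelihood(lex, jump, J, I))
```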
GEM-SciDuet-train-42#paper-1062#slide-2
1062
GEM-SciDuet-train-42#paper-1062#slide-2
Training
• Derivative for a single sentence pair (F, E) = (f_1^J, e_1^I): ∂/∂θ log p_θ(E|F) = Σ_{j′,j} Σ_i p_i(j′, j | f_1^J, e_1^I; θ) · ∂/∂θ log p(j, e_i | j′, e_0^{i−1}, f_1^J; θ), with the HMM posterior weights p_i(j′, j | ·) computed by the forward-backward algorithm • Entire training procedure: backpropagation in an EM framework: 1. compute the HMM posterior weights and the local gradients (backpropagation); 2. update neural network weights
• Derivative for a single sentence pair (F, E) = (f_1^J, e_1^I): ∂/∂θ log p_θ(E|F) = Σ_{j′,j} Σ_i p_i(j′, j | f_1^J, e_1^I; θ) · ∂/∂θ log p(j, e_i | j′, e_0^{i−1}, f_1^J; θ), with the HMM posterior weights p_i(j′, j | ·) computed by the forward-backward algorithm • Entire training procedure: backpropagation in an EM framework: 1. compute the HMM posterior weights and the local gradients (backpropagation); 2. update neural network weights
[]
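Training in the record above weights local gradients with HMM posteriors from the forward-backward algorithm. A toy numpy version of that posterior computation; the lexicon scores and transition matrix are random stand-ins for the neural lexicon and alignment models:

```python
import numpy as np

def forward_backward(lex, trans):
    """Posterior p(b_i = j | e_1^I, f_1^J) for an HMM with fixed tables."""
    I, J = lex.shape
    alpha = np.zeros((I, J))
    beta = np.zeros((I, J))
    alpha[0] = lex[0] / J                            # uniform initial alignment
    for i in range(1, I):
        alpha[i] = lex[i] * (alpha[i - 1] @ trans)   # forward pass
    beta[-1] = 1.0
    for i in range(I - 2, -1, -1):
        beta[i] = trans @ (lex[i + 1] * beta[i + 1])  # backward pass
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
lex = rng.random((4, 5))
trans = rng.random((5, 5))
trans /= trans.sum(axis=1, keepdims=True)
gamma = forward_backward(lex, trans)
print(gamma.sum(axis=1))   # each row of posteriors sums to 1

# In the EM-style loop, gamma plays the role of the posterior weights that
# multiply the local log-likelihood gradients before the parameter update.
```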
GEM-SciDuet-train-42#paper-1062#slide-3
1062
GEM-SciDuet-train-42#paper-1062#slide-3
Decoding
• Search over all possible target strings • Extending a partial hypothesis from e_0^{i−1} to e_0^i: Q(i, j; e_0^i) = Σ_{j′} p(j, e_i | j′, e_0^{i−1}, f_1^J) · Q(i−1, j′; e_0^{i−1}); argmax_{e_i} Q(i; e_0^i) selects several candidates e_i • No explicit coverage constraints: one-to-many alignment cases and unaligned source words • Search space in decoding: neural HMM consists of both alignment and translation decisions; attention model consists only of translation decisions • Complexity: neural HMM O(J² · I), attention model O(J · I); in practice, the neural HMM is about 3 times slower than the attention model
• Search over all possible target strings • Extending a partial hypothesis from e_0^{i−1} to e_0^i: Q(i, j; e_0^i) = Σ_{j′} p(j, e_i | j′, e_0^{i−1}, f_1^J) · Q(i−1, j′; e_0^{i−1}); argmax_{e_i} Q(i; e_0^i) selects several candidates e_i • No explicit coverage constraints: one-to-many alignment cases and unaligned source words • Search space in decoding: neural HMM consists of both alignment and translation decisions; attention model consists only of translation decisions • Complexity: neural HMM O(J² · I), attention model O(J · I); in practice, the neural HMM is about 3 times slower than the attention model
[]
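The decoding record above grows hypotheses with the recursion Q(i, j; e_0^i) = Σ_{j′} p(j, e_i | j′, e_0^{i−1}, f_1^J) · Q(i−1, j′; e_0^{i−1}). A toy sketch of one target-synchronous expansion step, with the joint score factorized into random stand-in jump and lexicon tables; names and shapes are illustrative assumptions:

```python
import numpy as np

def expand_step(Q_prev, jump, lex, beam_size):
    """One expansion of a single partial hypothesis.

    Q_prev: (J,) path scores per source position for the current prefix.
    jump:   (J, J) alignment probabilities p(j | j').
    lex:    (J, V) lexicon probabilities p(e | j) per source position.
    Returns the beam_size best next words with their updated Q vectors.
    """
    reach = Q_prev @ jump                 # marginalize previous positions j'
    Q_next = reach[:, None] * lex         # (J, V) scores per (position, word)
    word_scores = Q_next.sum(axis=0)      # Q(i; e_0^i): sum over positions j
    best = np.argsort(word_scores)[::-1][:beam_size]
    return [(int(w), Q_next[:, w]) for w in best]

rng = np.random.default_rng(2)
J, V = 6, 10
jump = rng.random((J, J)); jump /= jump.sum(axis=1, keepdims=True)
lex = rng.random((J, V)); lex /= lex.sum(axis=1, keepdims=True)
Q0 = np.full(J, 1.0 / J)                  # uniform start, as an assumption
for word, Q in expand_step(Q0, jump, lex, beam_size=3):
    print("candidate word id:", word, "score:", Q.sum())
```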
GEM-SciDuet-train-42#paper-1062#slide-4
1062
Neural Hidden Markov Model for Machine Translation
This work aims to investigate alternative neural machine translation (NMT) approaches and thus proposes a neural hidden Markov model (HMM) consisting of neural network-based alignment and lexicon models. The neural models make use of encoder and decoder components, but drop the attention component. The training is end-to-end and the standalone decoder is able to provide comparable performance with the state-of-the-art attention-based models on three different translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Attention-based neural translation models (Bahdanau et al., 2015; Luong et al., 2015) attend to specific positions on the source side to generate translation.", "Using the attention component provides significant improvements over the pure encoder-decoder sequence-to-sequence approach (Sutskever et al., 2014) that uses no such attention mechanism.", "In this work, we aim to compare the performance of attention-based models to another baseline, namely, neural hidden Markov models.", "The neural HMM has been successfully applied in the literature on top of conventional phrasebased systems (Wang et al., 2017) .", "In this work, our purpose is to explore its application in standalone decoding, i.e.", "the model is used to generate and score candidates without assistance from a phrase-based system.", "Because translation is done standalone using only neural models, we still refer to this as NMT.", "In addition, while Wang et al.", "(2017) applied feedforward networks to model alignment and translation, the recurrent structures proposed in this work surpass the feedforward variants by up to 1.3% in BLEU.", "By comparing neural HMM and attention-based NMT, we shed light on the role of the attention component.", "To this end, we use an alignmentbased model that has a recurrent bidirectional encoder and a recurrent decoder, but use no attention component.", "We replace the attention mechanism by a first-order HMM alignment model.", "Attention levels are deterministic normalized similarity scores part of the architecture design of an otherwise fully supervised classifier.", "HMM-style alignments on the other hand are discrete random variables and (unlike attention levels) must be marginalized.", "Once alignments are marginalized, which is tractable for a first-order HMM, parameters can be estimated to attain a local optimum of log-likelihood of observations as usual.", "Motivation In attention-based approaches, the alignment distribution is used to select the positions in the source sentence that the decoder attends to during translation.", "Thus the alignment model can be considered as an implicit part of the translation model.", "On the other hand, separating the alignment model from the lexicon model has its own advantages: First of all, this leads to more flexibility in modeling and training: The models can not only be trained separately, but they can also have different model types, such as neural models, count-based models, etc.", "Second, the separation avoids propagating errors from one model to another.", "In attention-based systems, the translation score is based on the alignment distribution, in which errors can be propagated from the alignment part to the translation part.", "Third, probabilistic treatment to alignments in NMT typically implies an extended degree of interpretability (e.g.", "one can inspect posteriors) and control over the model (e.g.", "one can impose priors over alignments and lexical distributions).", "Neural Hidden Markov Model Given a source sentence f J 1 = f 1 ...f j ...f J and a target 
sentence e I 1 = e 1 ...e i ...e I , where j = b i is the source position aligned to the target position i, we model translation using an alignment model and a lexicon model: p(e I 1 |f J 1 ) = b I 1 p(e I 1 , b I 1 |f J 1 ) (1) := b I 1 I i=1 p(e i |b i 1 , e i−1 0 , f J 1 ) lexicon model · p(b i |b i−1 1 , e i−1 0 , f J 1 ) alignment model (2) Instead of predicting the absolute source position b i , we use an alignment model Wang et al.", "(2017) applied feedforward neural networks for modeling the lexicon and alignment probabilities.", "In this work, we would like to model these distributions using recurrent neural networks (RNN).", "RNNs have been shown to outperform feedforward variants in language and translation modeling.", "This is mainly due to that RNN can handle arbitrary input lengths and thus include unbounded context information.", "Unfortunately, the recurrent hidden layer cannot be easily applied for the neural hidden Markov model, since it will significantly complicate the computation of forward-backward messages when running Baum-Welch.", "Nevertheless, we can apply long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) structure for source and target words embedding.", "With this technique we can take the essence of LSTM RNN and do not break any sequential generative model assumptions.", "p(∆ i |b i−1 1 , e i−1 0 , f J 1 ) that predicts the jump ∆ i = b i − b i−1 .", "Our models are close in structure to the model proposed in Luong et al.", "(2015) , where we have a component that encodes the source sentence, and another that encodes the target sentence.", "As shown in Figure 1 , we use a source side bidi- rectional LSTM embedding h j = − → h j + ← − h j , where − → h j = LSTM(W, f j , − → h j−1 ) and ← − h j = LSTM(V, f j , ← − h j+1 ), as well as a target side LSTM embedding s i−1 = LSTM(U, e i−1 , s i−2 ).", "h j , − → h j , ← − h j and s i−1 , s i−2 are vectors, W , V and U are weight matrices.", "Before the non-linear hidden layers, there is a projection layer which f1 · · · fj−1 fj fj+1 concatenates h j , s i−1 and e i−1 .", "Then the neural network-based lexicon model is given by · · · fJ e1 · · · ei−2 ei−1 − → s i−1 · · · · · · · · · · · · · · · · · · · · · − → h j ← − h j p(ei|hj, si−1, ei−1) p(e i |b i 1 , e i−1 0 , f J 1 ) := p(e i |h j , s i−1 , e i−1 ) (3) and the neural network-based alignment model p(b i |b i−1 1 , e i−1 0 , f J 1 ) := p(∆ i |h j , s i−1 , e i−1 ) (4) where j = b i−1 .", "The training criterion is the logarithm of sentence posterior probabilities over training sentence pairs (F r , E r ), r = 1, ..., R: arg max θ r log p θ (E r |F r ) (5) The derivative for a single sentence pair (F, E) = (f J 1 , e I 1 ) is: ∂ ∂θ log p θ (E|F ) = j ,j i p i (j , j|f J 1 , e I 1 ; θ) · ∂ ∂θ log p(j, e i |j , e i−1 0 , f J 1 ; θ) (6) with HMM posterior weights p i (j , j|f J 1 , e I 1 ; θ), which can be computed using the forwardbackward algorithm.", "The entire training procedure can be summarized as backpropagation in an EM framework: 1. compute: • the posterior HMM weights • the local gradients (backpropagation) 2. 
"Decoding In the decoding stage we still calculate the sum over alignments and apply a target-synchronous beam search for the target string.", "The auxiliary quantity for each unknown partial string $e_0^i$ is specified as $Q(i, j; e_0^i)$.", "During search, the partial hypothesis is extended from $e_0^{i-1}$ to $e_0^i$: $Q(i, j; e_0^i) = \sum_{j'} p(j, e_i \mid j', e_0^{i-1}, f_1^J) \cdot Q(i-1, j'; e_0^{i-1})$ (7)", "The decoder is shown in Algorithm 1.", "In the innermost loop (lines 11-13), alignments are hypothesized and used to calculate the auxiliary quantity $Q(i, j; e_0^i)$.", "Then for each source position $j$, the lexical distribution over the full target vocabulary is computed (line 14).", "The distributions are accumulated ($Q(i; e_0^i) = \sum_j Q(i, j; e_0^i)$, line 16), then sorted (line 18), and the best candidate translations ($\arg\max_{e_i} Q(i; e_0^i)$) lying within the beam are used to expand the partial hypotheses (lines 19-23).", "cache is a two-dimensional list of size $J \times |V_{\mathrm{src}}|$ (source vocabulary size), which is used to cache the current quantities.", "Whenever a partial hypothesis in the beam ends with the sentence end symbol (<EOF>), the counter is increased by 1 (lines 26-28).", "The translation is terminated if the counter reaches the beam size or the hypothesis length reaches three times the source sentence length (line 6).", "If a hypothesis stops but its score is worse than that of other hypotheses, it is eliminated from the beam, but it still contests non-terminated hypotheses.", "During comparison, the scores are normalized by hypothesis sentence length.", "Note that we have no explicit coverage constraints.", "This means that a source position can be revisited many times, thereby creating one-to-many alignment cases.", "This also allows unaligned source words.", "In the neural HMM decoder, word alignments are estimated and scored according to the distribution calculated by the neural network alignment model, so that alignment decisions become part of the beam search.", "The search space consists of both alignment and translation decisions.", "In contrast, the search space in attention-based decoding consists only of translation decisions.", "The decoding complexity is $O(J^2 \cdot I)$ ($J$ = source sentence length, $I$ = target sentence length), compared to $O(J \cdot I)$ for attention-based models.", "These are theoretical complexities of decoding on a CPU, considering only source and target sentence lengths.", "In practice, the size of the neural network must also be taken into account, and there are some optimized matrix multiplications for decoding on a GPU.", "In general, the decoding speed of our model is about 3 times slower than that of a standard attention model (1.07 vs. 3.00 sentences per second) on a single GPU.",
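As a companion sketch, the following shows one step of the target-synchronous beam search built around recursion (7). Names and the data layout are hypothetical, and much of Algorithm 1 is simplified away: in the real model the lexical distribution depends on the hypothesis through the decoder state $s_{i-1}$ (here it is passed in as a fixed table), and the caching, length normalization, and <EOF> stopping counter described above are omitted.

```python
import numpy as np

def beam_step(hyps, lex, align, beam_size):
    """One target-synchronous expansion step based on recursion (7).

    hyps : list of (prefix, Q_prev) with Q_prev[j] = Q(i-1, j; prefix)
    lex[j, e]    : p(e | b_i = j, prefix)  -- per-hypothesis in the real model
    align[jp, j] : p(b_i = j | b_{i-1} = j')
    Returns the beam_size best extended hypotheses with their new Q vectors.
    """
    candidates = []
    for prefix, Q_prev in hyps:
        # sum over j': reach[j] = sum_{j'} p(b_i = j | b_{i-1} = j') * Q(i-1, j')
        reach = Q_prev @ align
        # joint over alignment position and target word:
        # Q(i, j; prefix + [e]) = p(e | j) * reach[j]
        Q_new = reach[:, None] * lex               # shape (J, V)
        scores = Q_new.sum(axis=0)                 # Q(i; prefix + [e]), summed over j
        for e in np.argsort(scores)[-beam_size:]:  # best words for this hypothesis
            candidates.append((prefix + [int(e)], Q_new[:, e], float(scores[e])))
    candidates.sort(key=lambda c: c[2], reverse=True)
    return [(p, q) for p, q, _ in candidates[:beam_size]]
```

The sum over $j'$ in (7) becomes one vector-matrix product per hypothesis and per target position, which is where the additional factor of $J$ in the $O(J^2 \cdot I)$ decoding complexity comes from, compared to $O(J \cdot I)$ for attention-based decoding.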
"This is still an initial decoder, and we have not yet spent much time on accelerating its decoding.", "Optimizing our decoder would be a promising direction for future work.", "Experiments The experiments are conducted on the WMT 2017 German↔English and Chinese→English translation tasks, which consist of 5M and 23M parallel sentence pairs, respectively.", "Translation quality is measured with the case-sensitive BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) metrics on newstest2017, which contains 3004 (German↔English) and 2001 (Chinese→English) sentence pairs.", "For German and English preprocessing, we use the Moses tokenizer with hyphen splitting, and perform truecasing with Moses scripts (Koehn et al., 2007).", "For German↔English subword segmentation (Sennrich et al., 2016), we use 20K joint BPE operations.", "For the Chinese data, we segment it using the Jieba segmenter (https://github.com/fxsjy/jieba).", "We then learn a BPE model on the segmented Chinese, also using 20K merge operations.", "During training, sentences with a length greater than 50 subwords are filtered out.", "Attention-Based System The attention-based systems are trained with Sockeye (Hieber et al., 2017), which implements an attentional encoder-decoder with small modifications to the model in Bahdanau et al.", "(2015).", "The encoder and decoder word embeddings are of size 620.", "The encoder consists of a bidirectional layer with 1000 LSTMs with peephole connections to encode the source side.", "We use Adam (Kingma and Ba, 2015) as optimizer with a learning rate of 0.001, and a batch size of 50.", "The network is trained with 30% dropout for up to 500K iterations and evaluated every 10K iterations on the development set with BLEU.", "Decoding is done using beam search with a beam size of 12.", "In the end, the four best models are averaged as described in the beginning of Junczys-Dowmunt et al.", "(2016).", "Neural Hidden Markov Model The entire neural hidden Markov model is implemented in TensorFlow (Abadi et al., 2016).", "The feedforward models have three hidden layers of sizes 1000, 1000 and 500, respectively, with a 5-word source window and a 3-gram target history.", "200 nodes are used for word embeddings.", "The output layer of the neural lexicon model consists of around 25K nodes for all subword units, while the neural alignment model has a small output layer with 201 nodes, which reflects that the aligned position can jump within the range from −100 to 100.", "Apart from the basic projection layer, we also apply LSTM layers for the source and target word embeddings.", "The embedding layers have 350 nodes and the size of the projection layer is 800 (400 + 200 + 200, Figure 1).", "We use Adam as optimizer with a learning rate of 0.001.", "Neural lexicon and alignment models are trained with 30% dropout, and the norm of the gradient is clipped with a threshold of 1 (Pascanu et al., 2014).", "In decoding, we use a beam size of 12, and the element-wise average of all weights of the four best models also results in better performance.", "Results We compare the neural HMM approach (Subsection 5.2) with the state-of-the-art attention-based approach (Subsection 5.1) on different translation tasks.", "The results are presented in Table 1.", "Compared to the model presented in Wang et al.", "(2017), switching to LSTM models has a clear advantage, improving the FFNN-based system by up to 1.3% BLEU and 1.8% TER.", "It seems that the HMM model benefits from richer features, such as LSTM states, which are very similar to what an attention mechanism would require.",
"We actually expected it to do with less, the reason being that alignment distributions get refined a posteriori and so they do not have to be as strong a priori.", "We can also observe that the performance of our approach is comparable with that of the state-of-the-art attention-based system, which has 25M more parameters, on all three tasks.", "Alignment Analysis We show an example from the German→English newstest 2017 in Figure 2 (source sentence: 'er wollte nie an irgendeiner Art von Auseinandersetzung teilnehmen'), along with the attention and alignment matrices.", "We can observe that the neural network-based HMM generates a clearer alignment path than the attention weights.", "In this example, it can exactly estimate the alignment positions for the words wanted and of.", "Discussion We described a novel formulation for a neural network-based machine translation system, which applies neural networks to the conventional hidden Markov model.", "The training is end-to-end; the model is monolithic and can be used as a standalone decoder.", "This results in a more modern and efficient way to use HMMs in machine translation and enables neural networks to benefit from HMMs.", "Experiments show that replacing attention with alignment does not improve the translation performance of NMT significantly.", "One possible reason is that alignment may fail to capture relevant contexts as attention does.", "While alignment aims to identify translation equivalents between two languages, attention is designed to find relevant context for predicting the next target word.", "Source words with high attention weights are not necessarily translation equivalents of the target word.", "Although using alignment does not lead to significant improvements in terms of BLEU over attention, we think alignment-based NMT models are still useful for automatic post editing and developing coverage-based models.", "These might be interesting future directions to explore." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Motivation", "Neural Hidden Markov Model", "Decoding", "Experiments", "Attention-Based System", "Neural Hidden Markov Model", "Results", "Alignment Analysis", "Discussion" ] }
GEM-SciDuet-train-42#paper-1062#slide-4
Experimental Setup
I WMT 2017 German↔English and Chinese→English translation tasks I Quality measured with case-sensitive BLEU and TER on newstest2017 I Moses tokenizer and truecasing scripts [Koehn & Hoang+ 07] I Jieba segmenter for Chinese data (https://github.com/fxsjy/jieba) I 20K byte pair encoding (BPE) operations [Sennrich & Haddow+ 16]: joint for German↔English and separate for Chinese→English I Attention-based systems are trained with Sockeye [Hieber & Domhan+ 17]: encoder and decoder embedding layer size 620; a bidirectional encoder layer with 1000 LSTMs with peephole connections; Adam [Kingma & Ba 15] as optimizer with a learning rate of 0.001; beam search with beam size 12; model weights averaging I Neural hidden Markov model implemented in TensorFlow [Abadi & Agarwal+ 16]: three hidden layers of sizes 1000, 1000 and 500 respectively; normal softmax layer; lexicon model: large output layer with roughly 25K nodes; alignment model: small output layer with 201 nodes
I WMT 2017 German↔English and Chinese→English translation tasks I Quality measured with case-sensitive BLEU and TER on newstest2017 I Moses tokenizer and truecasing scripts [Koehn & Hoang+ 07] I Jieba segmenter for Chinese data (https://github.com/fxsjy/jieba) I 20K byte pair encoding (BPE) operations [Sennrich & Haddow+ 16]: joint for German↔English and separate for Chinese→English I Attention-based systems are trained with Sockeye [Hieber & Domhan+ 17]: encoder and decoder embedding layer size 620; a bidirectional encoder layer with 1000 LSTMs with peephole connections; Adam [Kingma & Ba 15] as optimizer with a learning rate of 0.001; beam search with beam size 12; model weights averaging I Neural hidden Markov model implemented in TensorFlow [Abadi & Agarwal+ 16]: three hidden layers of sizes 1000, 1000 and 500 respectively; normal softmax layer; lexicon model: large output layer with roughly 25K nodes; alignment model: small output layer with 201 nodes
[]
GEM-SciDuet-train-42#paper-1062#slide-5
1062
Neural Hidden Markov Model for Machine Translation
This work aims to investigate alternative neural machine translation (NMT) approaches and thus proposes a neural hidden Markov model (HMM) consisting of neural network-based alignment and lexicon models. The neural models make use of encoder and decoder components, but drop the attention component. The training is end-to-end and the standalone decoder is able to provide comparable performance with the state-of-the-art attention-based models on three different translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Attention-based neural translation models (Bahdanau et al., 2015; Luong et al., 2015) attend to specific positions on the source side to generate translation.", "Using the attention component provides significant improvements over the pure encoder-decoder sequence-to-sequence approach (Sutskever et al., 2014) that uses no such attention mechanism.", "In this work, we aim to compare the performance of attention-based models to another baseline, namely, neural hidden Markov models.", "The neural HMM has been successfully applied in the literature on top of conventional phrasebased systems (Wang et al., 2017) .", "In this work, our purpose is to explore its application in standalone decoding, i.e.", "the model is used to generate and score candidates without assistance from a phrase-based system.", "Because translation is done standalone using only neural models, we still refer to this as NMT.", "In addition, while Wang et al.", "(2017) applied feedforward networks to model alignment and translation, the recurrent structures proposed in this work surpass the feedforward variants by up to 1.3% in BLEU.", "By comparing neural HMM and attention-based NMT, we shed light on the role of the attention component.", "To this end, we use an alignmentbased model that has a recurrent bidirectional encoder and a recurrent decoder, but use no attention component.", "We replace the attention mechanism by a first-order HMM alignment model.", "Attention levels are deterministic normalized similarity scores part of the architecture design of an otherwise fully supervised classifier.", "HMM-style alignments on the other hand are discrete random variables and (unlike attention levels) must be marginalized.", "Once alignments are marginalized, which is tractable for a first-order HMM, parameters can be estimated to attain a local optimum of log-likelihood of observations as usual.", "Motivation In attention-based approaches, the alignment distribution is used to select the positions in the source sentence that the decoder attends to during translation.", "Thus the alignment model can be considered as an implicit part of the translation model.", "On the other hand, separating the alignment model from the lexicon model has its own advantages: First of all, this leads to more flexibility in modeling and training: The models can not only be trained separately, but they can also have different model types, such as neural models, count-based models, etc.", "Second, the separation avoids propagating errors from one model to another.", "In attention-based systems, the translation score is based on the alignment distribution, in which errors can be propagated from the alignment part to the translation part.", "Third, probabilistic treatment to alignments in NMT typically implies an extended degree of interpretability (e.g.", "one can inspect posteriors) and control over the model (e.g.", "one can impose priors over alignments and lexical distributions).", "Neural Hidden Markov Model Given a source sentence f J 1 = f 1 ...f j ...f J and a target 
sentence e I 1 = e 1 ...e i ...e I , where j = b i is the source position aligned to the target position i, we model translation using an alignment model and a lexicon model: p(e I 1 |f J 1 ) = b I 1 p(e I 1 , b I 1 |f J 1 ) (1) := b I 1 I i=1 p(e i |b i 1 , e i−1 0 , f J 1 ) lexicon model · p(b i |b i−1 1 , e i−1 0 , f J 1 ) alignment model (2) Instead of predicting the absolute source position b i , we use an alignment model Wang et al.", "(2017) applied feedforward neural networks for modeling the lexicon and alignment probabilities.", "In this work, we would like to model these distributions using recurrent neural networks (RNN).", "RNNs have been shown to outperform feedforward variants in language and translation modeling.", "This is mainly due to that RNN can handle arbitrary input lengths and thus include unbounded context information.", "Unfortunately, the recurrent hidden layer cannot be easily applied for the neural hidden Markov model, since it will significantly complicate the computation of forward-backward messages when running Baum-Welch.", "Nevertheless, we can apply long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) structure for source and target words embedding.", "With this technique we can take the essence of LSTM RNN and do not break any sequential generative model assumptions.", "p(∆ i |b i−1 1 , e i−1 0 , f J 1 ) that predicts the jump ∆ i = b i − b i−1 .", "Our models are close in structure to the model proposed in Luong et al.", "(2015) , where we have a component that encodes the source sentence, and another that encodes the target sentence.", "As shown in Figure 1 , we use a source side bidi- rectional LSTM embedding h j = − → h j + ← − h j , where − → h j = LSTM(W, f j , − → h j−1 ) and ← − h j = LSTM(V, f j , ← − h j+1 ), as well as a target side LSTM embedding s i−1 = LSTM(U, e i−1 , s i−2 ).", "h j , − → h j , ← − h j and s i−1 , s i−2 are vectors, W , V and U are weight matrices.", "Before the non-linear hidden layers, there is a projection layer which f1 · · · fj−1 fj fj+1 concatenates h j , s i−1 and e i−1 .", "Then the neural network-based lexicon model is given by · · · fJ e1 · · · ei−2 ei−1 − → s i−1 · · · · · · · · · · · · · · · · · · · · · − → h j ← − h j p(ei|hj, si−1, ei−1) p(e i |b i 1 , e i−1 0 , f J 1 ) := p(e i |h j , s i−1 , e i−1 ) (3) and the neural network-based alignment model p(b i |b i−1 1 , e i−1 0 , f J 1 ) := p(∆ i |h j , s i−1 , e i−1 ) (4) where j = b i−1 .", "The training criterion is the logarithm of sentence posterior probabilities over training sentence pairs (F r , E r ), r = 1, ..., R: arg max θ r log p θ (E r |F r ) (5) The derivative for a single sentence pair (F, E) = (f J 1 , e I 1 ) is: ∂ ∂θ log p θ (E|F ) = j ,j i p i (j , j|f J 1 , e I 1 ; θ) · ∂ ∂θ log p(j, e i |j , e i−1 0 , f J 1 ; θ) (6) with HMM posterior weights p i (j , j|f J 1 , e I 1 ; θ), which can be computed using the forwardbackward algorithm.", "The entire training procedure can be summarized as backpropagation in an EM framework: 1. compute: • the posterior HMM weights • the local gradients (backpropagation) 2. 
update neural network weights Decoding In the decoding stage we still calculate the sum over alignments and apply a target-synchronous beam search for the target string.", "The auxiliary quantity for each unknown partial string e i 0 is specified as Q(i, j; e i 0 ).", "During search, the partial hypothesis is extended from e i−1 0 to e i 0 : Q(i, j; e i 0 ) = j p(j, e i |j , e i−1 0 , f J 1 ) · Q(i − 1, j ; e i−1 0 ) (7) The decoder is shown in Algorithm 1.", "In the innermost loop (line 11-13), alignments are hypothesized and used to calculate the auxiliary quantity Q(i, j; e i 0 ).", "Then for each source position j, the lexical distribution over the full target vocabulary is computed (line 14).", "The distributions are accumulated (Q(i; e i 0 ) = j Q(i, j; e i 0 ), line 16), then sorted (line 18) and the best candidate translations (arg max e i Q(i; e i 0 )) lying within the beam are used to expand the partial hypotheses (line 19-23).", "cache is a two-dimensional list of size J × |V src | (source vocabulary size), which is used to cache the current quantities.", "Whenever a partial hypothesis in the beam ends with the sentence end symbol (<EOF>), the counter will be increased by 1 (line 26-28).", "The translation is terminated if the counter reaches the beam size or hypothesis sentence length reaches three times the source sentence length (line 6).", "If a hypothesis stops but its score is worse than other hypotheses, it is eliminated from the beam, but it still contests non-terminated hypotheses.", "During comparison the scores are normalized by hypothesis sentence length.", "Note that we have no explicit coverage constraints.", "This means that a source position can be revisited many times, whereby creating one-to-many alignment cases.", "This also allows unaligned source words.", "In the neural HMM decoder, word alignments are estimated and scored according to the distribution calculated by the neural network alignment model, leading alignment decisions to become part of the beam search.", "The search space consists of both alignment and translation decisions.", "In contrast, the search space in attentionbased decoding consists only of translation decisions.", "The decoding complexity is O(J 2 · I) (J = source sentence length, I = target sentence length) return GETBEST(hyps) 33: end function compared to O(J · I) for attention-based models.", "These are theoretical complexities of decoding on a CPU only considering source and target sentence lengths.", "In practice, the size of the neural network must also be taken into account, and there are some optimized matrix multiplications for decoding on a GPU.", "In general, the decoding speed of our model is about 3 times slower than that of a standard attention model (1.07 sentences per second vs. 
3.00 sentences per second) on a single GPU.", "This is still an initial decoder and we did not spend much time on accelerating its decoding yet.", "The optimization of our decoder would be a promising future work.", "Experiments The experiments are conducted on the WMT 2017 German↔English and Chinese→English translation tasks, which consist of 5M and 23M parallel sentence pairs respectively.", "Translation quality is measured with the case sensitive BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) metric on newstests 2017, which contain 3004 (German↔English) and 2001 (Chinese→English) sentence pairs.", "For German and English preprocessing, we use the Moses tokenizer with hyphen splitting, and perform truecasing with Moses scripts (Koehn et al., 2007) .", "For German↔English subword segmentation , we use 20K joint BPE operations.", "For the Chinese data, we segment it using the Jieba 1 segmenter.", "We then learn a BPE model on the segmented Chinese, also using 20K merge operations.", "During training, sentences with a length greater than 50 subwords are filtered out.", "Attention-Based System The attention-based systems are trained with Sockeye (Hieber et al., 2017) , which implement an attentional encoder-decoder with small modifications to the model in Bahdanau et al.", "(2015) .", "The encoder and decoder word embeddings are of size 620.", "The encoder consists of a bidirectional layer with 1000 LSTMs with peephole connections to encode the source side.", "We use Adam (Kingma and Ba, 2015) as optimizer with a learning rate of 0.001, and a batch size of 50.", "The network is trained with 30% dropout for up to 500K iterations and evaluated every 10K iterations on the development set with BLEU.", "Decoding is done using beam search with a beam size of 12.", "In the end the four best models are averaged as described in 1 https://github.com/fxsjy/jieba the beginning of Junczys-Dowmunt et al.", "(2016) .", "Neural Hidden Markov Model The entire neural hidden Markov model is implemented in TensorFlow (Abadi et al., 2016) .", "The feedforward models have three hidden layers of sizes 1000, 1000 and 500 respectively, with a 5word source window and a 3-gram target history.", "200 nodes are used for word embeddings.", "The output layer of the neural lexicon model consists of around 25K nodes for all subword units, while the neural alignment model has a small output layer with 201 nodes, which reflects that the aligned position can jump within the scope from −100 to 100.", "Apart from the basic projection layer, we also applied LSTM layers for the source and target words embedding.", "The embedding layers have 350 nodes and the size of the projection layer is 800 (400 + 200 + 200, Figure 1 ).", "We use Adam as optimizer with a learning rate of 0.001.", "Neural lexicon and alignment models are trained with 30% dropout and the norm of the gradient is clipped with a threshold 1 (Pascanu et al., 2014) .", "In decoding we use a beam size of 12 and the element-wise average of all weights of the four best models also results in better performance.", "Results We compare the neural HMM approach (Subsection 5.2) with the state-of-the-art attention-based approach (Subsection 5.1) on different translation tasks.", "The results are presented in Table 1 .", "Compare to the model presented in Wang et al.", "(2017) , switching to LSTM models has a clear advantage, which improves the FFNN-based system by up to 1.3% BLEU and 1.8% TER.", "It seems that the HMM model benefits from richer features, such as 
LSTM states, which are very similar to what an attention mechanism would require.", "We actually WMT Attention-based NMT e r w o l l t e n i e a n i r g e n d e i n e r A r t v o n A u s e i n a n d e r s e t z u n g t e i l n e h m expected it to do with less, the reason being that alignment distributions get refined a posteriori and so they do not have to be as strong a priori.", "We can also observe that the performance of our approach is comparable with the state-of-the-art attentionbased system with 25M more parameters on all three tasks.", "Alignment Analysis We show an example from the German→English newstest 2017 in Figure 2 , along with the attention and alignment matrices.", "We can observe that the neural network-based HMM could generate a more clear alignment path compared to the attention weights.", "In this example, it can exactly estimate the alignment positions for words wanted and of.", "Discussion We described a novel formulation for a neural network-based machine translation system, which applied neural networks to the conventional hidden Markov model.", "The training is end-to-end, the model is monolithic and can be used as a standalone decoder.", "This results in a more modern and efficient way to use HMM in machine translation and enables neural networks to benefit from HMMs.", "Experiments show that replacing attention with alignment does not improve the translation performance of NMT significantly.", "One possible reason is that alignment may fail to capture relevant contexts as attention does.", "While alignment aims to identify translation equivalents between two lan-guages, attention is designed to find relevant context for predicting the next target word.", "Source words with high attention weights are not necessarily translation equivalents of the target word.", "Although using alignment does not lead to significant improvements in terms of BLEU over attention, we think alignment-based NMT models are still useful for automatic post editing and developing coverage-based models.", "These might be interesting future directions to explore." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Motivation", "Neural Hidden Markov Model", "Decoding", "Experiments", "Attention-Based System", "Neural Hidden Markov Model", "Results", "Alignment Analysis", "Discussion" ] }
GEM-SciDuet-train-42#paper-1062#slide-5
Experimental Results
I Attention-based neural network: [Bahdanau & Cho+ 15] I FFNN-based neural HMM: [Wang & Alkhouli+ 17], applied in decoding I LSTM-based neural HMM: this work I All models trained without synthetic data I Single model used for decoding I LSTM models improve the FFNN-based system by up to 1.3% BLEU and 1.8% TER I Comparable performance with the attention-based system
I Attention-based neural network: [Bahdanau & Cho+ 15] I FFNN-based neural HMM: [Wang & Alkhouli+ 17], applied in decoding I LSTM-based neural HMM: this work I All models trained without synthetic data I Single model used for decoding I LSTM models improve the FFNN-based system by up to 1.3% BLEU and 1.8% TER I Comparable performance with the attention-based system
[]
GEM-SciDuet-train-42#paper-1062#slide-6
1062
Neural Hidden Markov Model for Machine Translation
This work aims to investigate alternative neural machine translation (NMT) approaches and thus proposes a neural hidden Markov model (HMM) consisting of neural network-based alignment and lexicon models. The neural models make use of encoder and decoder components, but drop the attention component. The training is end-to-end and the standalone decoder is able to provide comparable performance with the state-of-the-art attention-based models on three different translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Attention-based neural translation models (Bahdanau et al., 2015; Luong et al., 2015) attend to specific positions on the source side to generate translation.", "Using the attention component provides significant improvements over the pure encoder-decoder sequence-to-sequence approach (Sutskever et al., 2014) that uses no such attention mechanism.", "In this work, we aim to compare the performance of attention-based models to another baseline, namely, neural hidden Markov models.", "The neural HMM has been successfully applied in the literature on top of conventional phrasebased systems (Wang et al., 2017) .", "In this work, our purpose is to explore its application in standalone decoding, i.e.", "the model is used to generate and score candidates without assistance from a phrase-based system.", "Because translation is done standalone using only neural models, we still refer to this as NMT.", "In addition, while Wang et al.", "(2017) applied feedforward networks to model alignment and translation, the recurrent structures proposed in this work surpass the feedforward variants by up to 1.3% in BLEU.", "By comparing neural HMM and attention-based NMT, we shed light on the role of the attention component.", "To this end, we use an alignmentbased model that has a recurrent bidirectional encoder and a recurrent decoder, but use no attention component.", "We replace the attention mechanism by a first-order HMM alignment model.", "Attention levels are deterministic normalized similarity scores part of the architecture design of an otherwise fully supervised classifier.", "HMM-style alignments on the other hand are discrete random variables and (unlike attention levels) must be marginalized.", "Once alignments are marginalized, which is tractable for a first-order HMM, parameters can be estimated to attain a local optimum of log-likelihood of observations as usual.", "Motivation In attention-based approaches, the alignment distribution is used to select the positions in the source sentence that the decoder attends to during translation.", "Thus the alignment model can be considered as an implicit part of the translation model.", "On the other hand, separating the alignment model from the lexicon model has its own advantages: First of all, this leads to more flexibility in modeling and training: The models can not only be trained separately, but they can also have different model types, such as neural models, count-based models, etc.", "Second, the separation avoids propagating errors from one model to another.", "In attention-based systems, the translation score is based on the alignment distribution, in which errors can be propagated from the alignment part to the translation part.", "Third, probabilistic treatment to alignments in NMT typically implies an extended degree of interpretability (e.g.", "one can inspect posteriors) and control over the model (e.g.", "one can impose priors over alignments and lexical distributions).", "Neural Hidden Markov Model Given a source sentence f J 1 = f 1 ...f j ...f J and a target 
sentence e I 1 = e 1 ...e i ...e I , where j = b i is the source position aligned to the target position i, we model translation using an alignment model and a lexicon model: p(e I 1 |f J 1 ) = b I 1 p(e I 1 , b I 1 |f J 1 ) (1) := b I 1 I i=1 p(e i |b i 1 , e i−1 0 , f J 1 ) lexicon model · p(b i |b i−1 1 , e i−1 0 , f J 1 ) alignment model (2) Instead of predicting the absolute source position b i , we use an alignment model Wang et al.", "(2017) applied feedforward neural networks for modeling the lexicon and alignment probabilities.", "In this work, we would like to model these distributions using recurrent neural networks (RNN).", "RNNs have been shown to outperform feedforward variants in language and translation modeling.", "This is mainly due to that RNN can handle arbitrary input lengths and thus include unbounded context information.", "Unfortunately, the recurrent hidden layer cannot be easily applied for the neural hidden Markov model, since it will significantly complicate the computation of forward-backward messages when running Baum-Welch.", "Nevertheless, we can apply long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) structure for source and target words embedding.", "With this technique we can take the essence of LSTM RNN and do not break any sequential generative model assumptions.", "p(∆ i |b i−1 1 , e i−1 0 , f J 1 ) that predicts the jump ∆ i = b i − b i−1 .", "Our models are close in structure to the model proposed in Luong et al.", "(2015) , where we have a component that encodes the source sentence, and another that encodes the target sentence.", "As shown in Figure 1 , we use a source side bidi- rectional LSTM embedding h j = − → h j + ← − h j , where − → h j = LSTM(W, f j , − → h j−1 ) and ← − h j = LSTM(V, f j , ← − h j+1 ), as well as a target side LSTM embedding s i−1 = LSTM(U, e i−1 , s i−2 ).", "h j , − → h j , ← − h j and s i−1 , s i−2 are vectors, W , V and U are weight matrices.", "Before the non-linear hidden layers, there is a projection layer which f1 · · · fj−1 fj fj+1 concatenates h j , s i−1 and e i−1 .", "Then the neural network-based lexicon model is given by · · · fJ e1 · · · ei−2 ei−1 − → s i−1 · · · · · · · · · · · · · · · · · · · · · − → h j ← − h j p(ei|hj, si−1, ei−1) p(e i |b i 1 , e i−1 0 , f J 1 ) := p(e i |h j , s i−1 , e i−1 ) (3) and the neural network-based alignment model p(b i |b i−1 1 , e i−1 0 , f J 1 ) := p(∆ i |h j , s i−1 , e i−1 ) (4) where j = b i−1 .", "The training criterion is the logarithm of sentence posterior probabilities over training sentence pairs (F r , E r ), r = 1, ..., R: arg max θ r log p θ (E r |F r ) (5) The derivative for a single sentence pair (F, E) = (f J 1 , e I 1 ) is: ∂ ∂θ log p θ (E|F ) = j ,j i p i (j , j|f J 1 , e I 1 ; θ) · ∂ ∂θ log p(j, e i |j , e i−1 0 , f J 1 ; θ) (6) with HMM posterior weights p i (j , j|f J 1 , e I 1 ; θ), which can be computed using the forwardbackward algorithm.", "The entire training procedure can be summarized as backpropagation in an EM framework: 1. compute: • the posterior HMM weights • the local gradients (backpropagation) 2. 
update neural network weights Decoding In the decoding stage we still calculate the sum over alignments and apply a target-synchronous beam search for the target string.", "The auxiliary quantity for each unknown partial string e i 0 is specified as Q(i, j; e i 0 ).", "During search, the partial hypothesis is extended from e i−1 0 to e i 0 : Q(i, j; e i 0 ) = j p(j, e i |j , e i−1 0 , f J 1 ) · Q(i − 1, j ; e i−1 0 ) (7) The decoder is shown in Algorithm 1.", "In the innermost loop (line 11-13), alignments are hypothesized and used to calculate the auxiliary quantity Q(i, j; e i 0 ).", "Then for each source position j, the lexical distribution over the full target vocabulary is computed (line 14).", "The distributions are accumulated (Q(i; e i 0 ) = j Q(i, j; e i 0 ), line 16), then sorted (line 18) and the best candidate translations (arg max e i Q(i; e i 0 )) lying within the beam are used to expand the partial hypotheses (line 19-23).", "cache is a two-dimensional list of size J × |V src | (source vocabulary size), which is used to cache the current quantities.", "Whenever a partial hypothesis in the beam ends with the sentence end symbol (<EOF>), the counter will be increased by 1 (line 26-28).", "The translation is terminated if the counter reaches the beam size or hypothesis sentence length reaches three times the source sentence length (line 6).", "If a hypothesis stops but its score is worse than other hypotheses, it is eliminated from the beam, but it still contests non-terminated hypotheses.", "During comparison the scores are normalized by hypothesis sentence length.", "Note that we have no explicit coverage constraints.", "This means that a source position can be revisited many times, whereby creating one-to-many alignment cases.", "This also allows unaligned source words.", "In the neural HMM decoder, word alignments are estimated and scored according to the distribution calculated by the neural network alignment model, leading alignment decisions to become part of the beam search.", "The search space consists of both alignment and translation decisions.", "In contrast, the search space in attentionbased decoding consists only of translation decisions.", "The decoding complexity is O(J 2 · I) (J = source sentence length, I = target sentence length) return GETBEST(hyps) 33: end function compared to O(J · I) for attention-based models.", "These are theoretical complexities of decoding on a CPU only considering source and target sentence lengths.", "In practice, the size of the neural network must also be taken into account, and there are some optimized matrix multiplications for decoding on a GPU.", "In general, the decoding speed of our model is about 3 times slower than that of a standard attention model (1.07 sentences per second vs. 
3.00 sentences per second) on a single GPU.", "This is still an initial decoder and we did not spend much time on accelerating its decoding yet.", "The optimization of our decoder would be a promising future work.", "Experiments The experiments are conducted on the WMT 2017 German↔English and Chinese→English translation tasks, which consist of 5M and 23M parallel sentence pairs respectively.", "Translation quality is measured with the case sensitive BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) metric on newstests 2017, which contain 3004 (German↔English) and 2001 (Chinese→English) sentence pairs.", "For German and English preprocessing, we use the Moses tokenizer with hyphen splitting, and perform truecasing with Moses scripts (Koehn et al., 2007) .", "For German↔English subword segmentation , we use 20K joint BPE operations.", "For the Chinese data, we segment it using the Jieba 1 segmenter.", "We then learn a BPE model on the segmented Chinese, also using 20K merge operations.", "During training, sentences with a length greater than 50 subwords are filtered out.", "Attention-Based System The attention-based systems are trained with Sockeye (Hieber et al., 2017) , which implement an attentional encoder-decoder with small modifications to the model in Bahdanau et al.", "(2015) .", "The encoder and decoder word embeddings are of size 620.", "The encoder consists of a bidirectional layer with 1000 LSTMs with peephole connections to encode the source side.", "We use Adam (Kingma and Ba, 2015) as optimizer with a learning rate of 0.001, and a batch size of 50.", "The network is trained with 30% dropout for up to 500K iterations and evaluated every 10K iterations on the development set with BLEU.", "Decoding is done using beam search with a beam size of 12.", "In the end the four best models are averaged as described in 1 https://github.com/fxsjy/jieba the beginning of Junczys-Dowmunt et al.", "(2016) .", "Neural Hidden Markov Model The entire neural hidden Markov model is implemented in TensorFlow (Abadi et al., 2016) .", "The feedforward models have three hidden layers of sizes 1000, 1000 and 500 respectively, with a 5word source window and a 3-gram target history.", "200 nodes are used for word embeddings.", "The output layer of the neural lexicon model consists of around 25K nodes for all subword units, while the neural alignment model has a small output layer with 201 nodes, which reflects that the aligned position can jump within the scope from −100 to 100.", "Apart from the basic projection layer, we also applied LSTM layers for the source and target words embedding.", "The embedding layers have 350 nodes and the size of the projection layer is 800 (400 + 200 + 200, Figure 1 ).", "We use Adam as optimizer with a learning rate of 0.001.", "Neural lexicon and alignment models are trained with 30% dropout and the norm of the gradient is clipped with a threshold 1 (Pascanu et al., 2014) .", "In decoding we use a beam size of 12 and the element-wise average of all weights of the four best models also results in better performance.", "Results We compare the neural HMM approach (Subsection 5.2) with the state-of-the-art attention-based approach (Subsection 5.1) on different translation tasks.", "The results are presented in Table 1 .", "Compare to the model presented in Wang et al.", "(2017) , switching to LSTM models has a clear advantage, which improves the FFNN-based system by up to 1.3% BLEU and 1.8% TER.", "It seems that the HMM model benefits from richer features, such as 
LSTM states, which are very similar to what an attention mechanism would require.", "We actually WMT Attention-based NMT e r w o l l t e n i e a n i r g e n d e i n e r A r t v o n A u s e i n a n d e r s e t z u n g t e i l n e h m expected it to do with less, the reason being that alignment distributions get refined a posteriori and so they do not have to be as strong a priori.", "We can also observe that the performance of our approach is comparable with the state-of-the-art attentionbased system with 25M more parameters on all three tasks.", "Alignment Analysis We show an example from the German→English newstest 2017 in Figure 2 , along with the attention and alignment matrices.", "We can observe that the neural network-based HMM could generate a more clear alignment path compared to the attention weights.", "In this example, it can exactly estimate the alignment positions for words wanted and of.", "Discussion We described a novel formulation for a neural network-based machine translation system, which applied neural networks to the conventional hidden Markov model.", "The training is end-to-end, the model is monolithic and can be used as a standalone decoder.", "This results in a more modern and efficient way to use HMM in machine translation and enables neural networks to benefit from HMMs.", "Experiments show that replacing attention with alignment does not improve the translation performance of NMT significantly.", "One possible reason is that alignment may fail to capture relevant contexts as attention does.", "While alignment aims to identify translation equivalents between two lan-guages, attention is designed to find relevant context for predicting the next target word.", "Source words with high attention weights are not necessarily translation equivalents of the target word.", "Although using alignment does not lead to significant improvements in terms of BLEU over attention, we think alignment-based NMT models are still useful for automatic post editing and developing coverage-based models.", "These might be interesting future directions to explore." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Motivation", "Neural Hidden Markov Model", "Decoding", "Experiments", "Attention-Based System", "Neural Hidden Markov Model", "Results", "Alignment Analysis", "Discussion" ] }
GEM-SciDuet-train-42#paper-1062#slide-6
Summary
I Apply NNs to the conventional HMM for MT I End-to-end training with a stand-alone decoder I Comparable performance with the standard attention-based system I Significantly outperforms the feed-forward variant I Future work: speed up training and decoding; application in automatic post editing; combination with attention or transformer [Vaswani & Shazeer+ 17] models
I Apply NNs to the conventional HMM for MT I End-to-end training with a stand-alone decoder I Comparable performance with the standard attention-based system I Significantly outperforms the feed-forward variant I Future work: speed up training and decoding; application in automatic post editing; combination with attention or transformer [Vaswani & Shazeer+ 17] models
[]
GEM-SciDuet-train-42#paper-1062#slide-7
1062
Neural Hidden Markov Model for Machine Translation
This work aims to investigate alternative neural machine translation (NMT) approaches and thus proposes a neural hidden Markov model (HMM) consisting of neural network-based alignment and lexicon models. The neural models make use of encoder and decoder components, but drop the attention component. The training is end-to-end and the standalone decoder is able to provide comparable performance with the state-of-the-art attention-based models on three different translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Attention-based neural translation models (Bahdanau et al., 2015; Luong et al., 2015) attend to specific positions on the source side to generate translation.", "Using the attention component provides significant improvements over the pure encoder-decoder sequence-to-sequence approach (Sutskever et al., 2014) that uses no such attention mechanism.", "In this work, we aim to compare the performance of attention-based models to another baseline, namely, neural hidden Markov models.", "The neural HMM has been successfully applied in the literature on top of conventional phrasebased systems (Wang et al., 2017) .", "In this work, our purpose is to explore its application in standalone decoding, i.e.", "the model is used to generate and score candidates without assistance from a phrase-based system.", "Because translation is done standalone using only neural models, we still refer to this as NMT.", "In addition, while Wang et al.", "(2017) applied feedforward networks to model alignment and translation, the recurrent structures proposed in this work surpass the feedforward variants by up to 1.3% in BLEU.", "By comparing neural HMM and attention-based NMT, we shed light on the role of the attention component.", "To this end, we use an alignmentbased model that has a recurrent bidirectional encoder and a recurrent decoder, but use no attention component.", "We replace the attention mechanism by a first-order HMM alignment model.", "Attention levels are deterministic normalized similarity scores part of the architecture design of an otherwise fully supervised classifier.", "HMM-style alignments on the other hand are discrete random variables and (unlike attention levels) must be marginalized.", "Once alignments are marginalized, which is tractable for a first-order HMM, parameters can be estimated to attain a local optimum of log-likelihood of observations as usual.", "Motivation In attention-based approaches, the alignment distribution is used to select the positions in the source sentence that the decoder attends to during translation.", "Thus the alignment model can be considered as an implicit part of the translation model.", "On the other hand, separating the alignment model from the lexicon model has its own advantages: First of all, this leads to more flexibility in modeling and training: The models can not only be trained separately, but they can also have different model types, such as neural models, count-based models, etc.", "Second, the separation avoids propagating errors from one model to another.", "In attention-based systems, the translation score is based on the alignment distribution, in which errors can be propagated from the alignment part to the translation part.", "Third, probabilistic treatment to alignments in NMT typically implies an extended degree of interpretability (e.g.", "one can inspect posteriors) and control over the model (e.g.", "one can impose priors over alignments and lexical distributions).", "Neural Hidden Markov Model Given a source sentence f J 1 = f 1 ...f j ...f J and a target 
sentence e I 1 = e 1 ...e i ...e I , where j = b i is the source position aligned to the target position i, we model translation using an alignment model and a lexicon model: p(e I 1 |f J 1 ) = b I 1 p(e I 1 , b I 1 |f J 1 ) (1) := b I 1 I i=1 p(e i |b i 1 , e i−1 0 , f J 1 ) lexicon model · p(b i |b i−1 1 , e i−1 0 , f J 1 ) alignment model (2) Instead of predicting the absolute source position b i , we use an alignment model Wang et al.", "(2017) applied feedforward neural networks for modeling the lexicon and alignment probabilities.", "In this work, we would like to model these distributions using recurrent neural networks (RNN).", "RNNs have been shown to outperform feedforward variants in language and translation modeling.", "This is mainly due to that RNN can handle arbitrary input lengths and thus include unbounded context information.", "Unfortunately, the recurrent hidden layer cannot be easily applied for the neural hidden Markov model, since it will significantly complicate the computation of forward-backward messages when running Baum-Welch.", "Nevertheless, we can apply long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) structure for source and target words embedding.", "With this technique we can take the essence of LSTM RNN and do not break any sequential generative model assumptions.", "p(∆ i |b i−1 1 , e i−1 0 , f J 1 ) that predicts the jump ∆ i = b i − b i−1 .", "Our models are close in structure to the model proposed in Luong et al.", "(2015) , where we have a component that encodes the source sentence, and another that encodes the target sentence.", "As shown in Figure 1 , we use a source side bidi- rectional LSTM embedding h j = − → h j + ← − h j , where − → h j = LSTM(W, f j , − → h j−1 ) and ← − h j = LSTM(V, f j , ← − h j+1 ), as well as a target side LSTM embedding s i−1 = LSTM(U, e i−1 , s i−2 ).", "h j , − → h j , ← − h j and s i−1 , s i−2 are vectors, W , V and U are weight matrices.", "Before the non-linear hidden layers, there is a projection layer which f1 · · · fj−1 fj fj+1 concatenates h j , s i−1 and e i−1 .", "Then the neural network-based lexicon model is given by · · · fJ e1 · · · ei−2 ei−1 − → s i−1 · · · · · · · · · · · · · · · · · · · · · − → h j ← − h j p(ei|hj, si−1, ei−1) p(e i |b i 1 , e i−1 0 , f J 1 ) := p(e i |h j , s i−1 , e i−1 ) (3) and the neural network-based alignment model p(b i |b i−1 1 , e i−1 0 , f J 1 ) := p(∆ i |h j , s i−1 , e i−1 ) (4) where j = b i−1 .", "The training criterion is the logarithm of sentence posterior probabilities over training sentence pairs (F r , E r ), r = 1, ..., R: arg max θ r log p θ (E r |F r ) (5) The derivative for a single sentence pair (F, E) = (f J 1 , e I 1 ) is: ∂ ∂θ log p θ (E|F ) = j ,j i p i (j , j|f J 1 , e I 1 ; θ) · ∂ ∂θ log p(j, e i |j , e i−1 0 , f J 1 ; θ) (6) with HMM posterior weights p i (j , j|f J 1 , e I 1 ; θ), which can be computed using the forwardbackward algorithm.", "The entire training procedure can be summarized as backpropagation in an EM framework: 1. compute: • the posterior HMM weights • the local gradients (backpropagation) 2. 
update neural network weights Decoding In the decoding stage we still calculate the sum over alignments and apply a target-synchronous beam search for the target string.", "The auxiliary quantity for each unknown partial string e i 0 is specified as Q(i, j; e i 0 ).", "During search, the partial hypothesis is extended from e i−1 0 to e i 0 : Q(i, j; e i 0 ) = j p(j, e i |j , e i−1 0 , f J 1 ) · Q(i − 1, j ; e i−1 0 ) (7) The decoder is shown in Algorithm 1.", "In the innermost loop (line 11-13), alignments are hypothesized and used to calculate the auxiliary quantity Q(i, j; e i 0 ).", "Then for each source position j, the lexical distribution over the full target vocabulary is computed (line 14).", "The distributions are accumulated (Q(i; e i 0 ) = j Q(i, j; e i 0 ), line 16), then sorted (line 18) and the best candidate translations (arg max e i Q(i; e i 0 )) lying within the beam are used to expand the partial hypotheses (line 19-23).", "cache is a two-dimensional list of size J × |V src | (source vocabulary size), which is used to cache the current quantities.", "Whenever a partial hypothesis in the beam ends with the sentence end symbol (<EOF>), the counter will be increased by 1 (line 26-28).", "The translation is terminated if the counter reaches the beam size or hypothesis sentence length reaches three times the source sentence length (line 6).", "If a hypothesis stops but its score is worse than other hypotheses, it is eliminated from the beam, but it still contests non-terminated hypotheses.", "During comparison the scores are normalized by hypothesis sentence length.", "Note that we have no explicit coverage constraints.", "This means that a source position can be revisited many times, whereby creating one-to-many alignment cases.", "This also allows unaligned source words.", "In the neural HMM decoder, word alignments are estimated and scored according to the distribution calculated by the neural network alignment model, leading alignment decisions to become part of the beam search.", "The search space consists of both alignment and translation decisions.", "In contrast, the search space in attentionbased decoding consists only of translation decisions.", "The decoding complexity is O(J 2 · I) (J = source sentence length, I = target sentence length) return GETBEST(hyps) 33: end function compared to O(J · I) for attention-based models.", "These are theoretical complexities of decoding on a CPU only considering source and target sentence lengths.", "In practice, the size of the neural network must also be taken into account, and there are some optimized matrix multiplications for decoding on a GPU.", "In general, the decoding speed of our model is about 3 times slower than that of a standard attention model (1.07 sentences per second vs. 
3.00 sentences per second) on a single GPU.", "This is still an initial decoder and we did not spend much time on accelerating its decoding yet.", "The optimization of our decoder would be a promising future work.", "Experiments The experiments are conducted on the WMT 2017 German↔English and Chinese→English translation tasks, which consist of 5M and 23M parallel sentence pairs respectively.", "Translation quality is measured with the case sensitive BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) metric on newstests 2017, which contain 3004 (German↔English) and 2001 (Chinese→English) sentence pairs.", "For German and English preprocessing, we use the Moses tokenizer with hyphen splitting, and perform truecasing with Moses scripts (Koehn et al., 2007) .", "For German↔English subword segmentation , we use 20K joint BPE operations.", "For the Chinese data, we segment it using the Jieba 1 segmenter.", "We then learn a BPE model on the segmented Chinese, also using 20K merge operations.", "During training, sentences with a length greater than 50 subwords are filtered out.", "Attention-Based System The attention-based systems are trained with Sockeye (Hieber et al., 2017) , which implement an attentional encoder-decoder with small modifications to the model in Bahdanau et al.", "(2015) .", "The encoder and decoder word embeddings are of size 620.", "The encoder consists of a bidirectional layer with 1000 LSTMs with peephole connections to encode the source side.", "We use Adam (Kingma and Ba, 2015) as optimizer with a learning rate of 0.001, and a batch size of 50.", "The network is trained with 30% dropout for up to 500K iterations and evaluated every 10K iterations on the development set with BLEU.", "Decoding is done using beam search with a beam size of 12.", "In the end the four best models are averaged as described in 1 https://github.com/fxsjy/jieba the beginning of Junczys-Dowmunt et al.", "(2016) .", "Neural Hidden Markov Model The entire neural hidden Markov model is implemented in TensorFlow (Abadi et al., 2016) .", "The feedforward models have three hidden layers of sizes 1000, 1000 and 500 respectively, with a 5word source window and a 3-gram target history.", "200 nodes are used for word embeddings.", "The output layer of the neural lexicon model consists of around 25K nodes for all subword units, while the neural alignment model has a small output layer with 201 nodes, which reflects that the aligned position can jump within the scope from −100 to 100.", "Apart from the basic projection layer, we also applied LSTM layers for the source and target words embedding.", "The embedding layers have 350 nodes and the size of the projection layer is 800 (400 + 200 + 200, Figure 1 ).", "We use Adam as optimizer with a learning rate of 0.001.", "Neural lexicon and alignment models are trained with 30% dropout and the norm of the gradient is clipped with a threshold 1 (Pascanu et al., 2014) .", "In decoding we use a beam size of 12 and the element-wise average of all weights of the four best models also results in better performance.", "Results We compare the neural HMM approach (Subsection 5.2) with the state-of-the-art attention-based approach (Subsection 5.1) on different translation tasks.", "The results are presented in Table 1 .", "Compare to the model presented in Wang et al.", "(2017) , switching to LSTM models has a clear advantage, which improves the FFNN-based system by up to 1.3% BLEU and 1.8% TER.", "It seems that the HMM model benefits from richer features, such as 
LSTM states, which are very similar to what an attention mechanism would require.", "We actually expected it to do with less, the reason being that alignment distributions get refined a posteriori and so they do not have to be as strong a priori.", "We can also observe that the performance of our approach is comparable with that of the state-of-the-art attention-based system, which has 25M more parameters, on all three tasks.", "Alignment Analysis We show an example from the German→English newstest 2017 in Figure 2, along with the attention and alignment matrices.", "We can observe that the neural network-based HMM generates a clearer alignment path compared to the attention weights.", "In this example, it can exactly estimate the alignment positions for the words wanted and of.", "Discussion We described a novel formulation for a neural network-based machine translation system, which applies neural networks to the conventional hidden Markov model.", "The training is end-to-end, and the model is monolithic and can be used as a standalone decoder.", "This results in a more modern and efficient way to use HMMs in machine translation and enables neural networks to benefit from HMMs.", "Experiments show that replacing attention with alignment does not significantly improve the translation performance of NMT.", "One possible reason is that alignment may fail to capture relevant contexts the way attention does.", "While alignment aims to identify translation equivalents between two languages, attention is designed to find relevant context for predicting the next target word.", "Source words with high attention weights are not necessarily translation equivalents of the target word.", "Although using alignment does not lead to significant improvements in terms of BLEU over attention, we think alignment-based NMT models are still useful for automatic post-editing and for developing coverage-based models.", "These might be interesting future directions to explore." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Motivation", "Neural Hidden Markov Model", "Decoding", "Experiments", "Attention-Based System", "Neural Hidden Markov Model", "Results", "Alignment Analysis", "Discussion" ] }
GEM-SciDuet-train-42#paper-1062#slide-7
Appendix Motivation
Neural HMM compared to attention-based systems: recurrent encoder and decoder without an attention component; the attention mechanism is replaced by a first-order HMM alignment model. Attention levels are deterministic normalized similarity scores, whereas HMM alignments are discrete random variables and must be marginalized. Separating the alignment model from the lexicon model gives more flexibility in modeling and training, avoids propagating errors from one model to another, and implies an extended degree of interpretability and control over the model.
Neural HMM compared to attention-based systems: recurrent encoder and decoder without an attention component; the attention mechanism is replaced by a first-order HMM alignment model. Attention levels are deterministic normalized similarity scores, whereas HMM alignments are discrete random variables and must be marginalized. Separating the alignment model from the lexicon model gives more flexibility in modeling and training, avoids propagating errors from one model to another, and implies an extended degree of interpretability and control over the model.
[]
GEM-SciDuet-train-42#paper-1062#slide-8
1062
Neural Hidden Markov Model for Machine Translation
This work aims to investigate alternative neural machine translation (NMT) approaches and thus proposes a neural hidden Markov model (HMM) consisting of neural network-based alignment and lexicon models. The neural models make use of encoder and decoder components, but drop the attention component. The training is end-to-end and the standalone decoder is able to provide comparable performance with the state-of-the-art attention-based models on three different translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Attention-based neural translation models (Bahdanau et al., 2015; Luong et al., 2015) attend to specific positions on the source side to generate translation.", "Using the attention component provides significant improvements over the pure encoder-decoder sequence-to-sequence approach (Sutskever et al., 2014) that uses no such attention mechanism.", "In this work, we aim to compare the performance of attention-based models to another baseline, namely, neural hidden Markov models.", "The neural HMM has been successfully applied in the literature on top of conventional phrasebased systems (Wang et al., 2017) .", "In this work, our purpose is to explore its application in standalone decoding, i.e.", "the model is used to generate and score candidates without assistance from a phrase-based system.", "Because translation is done standalone using only neural models, we still refer to this as NMT.", "In addition, while Wang et al.", "(2017) applied feedforward networks to model alignment and translation, the recurrent structures proposed in this work surpass the feedforward variants by up to 1.3% in BLEU.", "By comparing neural HMM and attention-based NMT, we shed light on the role of the attention component.", "To this end, we use an alignmentbased model that has a recurrent bidirectional encoder and a recurrent decoder, but use no attention component.", "We replace the attention mechanism by a first-order HMM alignment model.", "Attention levels are deterministic normalized similarity scores part of the architecture design of an otherwise fully supervised classifier.", "HMM-style alignments on the other hand are discrete random variables and (unlike attention levels) must be marginalized.", "Once alignments are marginalized, which is tractable for a first-order HMM, parameters can be estimated to attain a local optimum of log-likelihood of observations as usual.", "Motivation In attention-based approaches, the alignment distribution is used to select the positions in the source sentence that the decoder attends to during translation.", "Thus the alignment model can be considered as an implicit part of the translation model.", "On the other hand, separating the alignment model from the lexicon model has its own advantages: First of all, this leads to more flexibility in modeling and training: The models can not only be trained separately, but they can also have different model types, such as neural models, count-based models, etc.", "Second, the separation avoids propagating errors from one model to another.", "In attention-based systems, the translation score is based on the alignment distribution, in which errors can be propagated from the alignment part to the translation part.", "Third, probabilistic treatment to alignments in NMT typically implies an extended degree of interpretability (e.g.", "one can inspect posteriors) and control over the model (e.g.", "one can impose priors over alignments and lexical distributions).", "Neural Hidden Markov Model Given a source sentence f J 1 = f 1 ...f j ...f J and a target 
sentence $e_1^I = e_1 \ldots e_i \ldots e_I$, where $j = b_i$ is the source position aligned to the target position $i$, we model translation using an alignment model and a lexicon model: $p(e_1^I | f_1^J) = \sum_{b_1^I} p(e_1^I, b_1^I | f_1^J)$ (1) $:= \sum_{b_1^I} \prod_{i=1}^{I} \underbrace{p(e_i | b_1^i, e_0^{i-1}, f_1^J)}_{\text{lexicon model}} \cdot \underbrace{p(b_i | b_1^{i-1}, e_0^{i-1}, f_1^J)}_{\text{alignment model}}$ (2) Instead of predicting the absolute source position $b_i$, we use an alignment model $p(\Delta_i | b_1^{i-1}, e_0^{i-1}, f_1^J)$ that predicts the jump $\Delta_i = b_i - b_{i-1}$.", "Wang et al. (2017) applied feedforward neural networks for modeling the lexicon and alignment probabilities.", "In this work, we would like to model these distributions using recurrent neural networks (RNNs).", "RNNs have been shown to outperform feedforward variants in language and translation modeling.", "This is mainly because RNNs can handle arbitrary input lengths and thus include unbounded context information.", "Unfortunately, a recurrent hidden layer cannot be easily applied to the neural hidden Markov model, since it would significantly complicate the computation of the forward-backward messages when running Baum-Welch.", "Nevertheless, we can apply the long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) structure for the source and target word embeddings.", "With this technique we keep the essence of LSTM RNNs without breaking any sequential generative model assumptions.", "Our models are close in structure to the model proposed in Luong et al. (2015), where we have a component that encodes the source sentence, and another that encodes the target sentence.", "As shown in Figure 1, we use a source-side bidirectional LSTM embedding $h_j = \overrightarrow{h}_j + \overleftarrow{h}_j$, where $\overrightarrow{h}_j = \mathrm{LSTM}(W, f_j, \overrightarrow{h}_{j-1})$ and $\overleftarrow{h}_j = \mathrm{LSTM}(V, f_j, \overleftarrow{h}_{j+1})$, as well as a target-side LSTM embedding $s_{i-1} = \mathrm{LSTM}(U, e_{i-1}, s_{i-2})$.", "$h_j$, $\overrightarrow{h}_j$, $\overleftarrow{h}_j$ and $s_{i-1}$, $s_{i-2}$ are vectors; $W$, $V$ and $U$ are weight matrices.", "Before the non-linear hidden layers, there is a projection layer which concatenates $h_j$, $s_{i-1}$ and $e_{i-1}$.", "Then the neural network-based lexicon model is given by $p(e_i | b_1^i, e_0^{i-1}, f_1^J) := p(e_i | h_j, s_{i-1}, e_{i-1})$ (3) and the neural network-based alignment model by $p(b_i | b_1^{i-1}, e_0^{i-1}, f_1^J) := p(\Delta_i | h_j, s_{i-1}, e_{i-1})$ (4) where $j = b_{i-1}$.", "The training criterion is the logarithm of the sentence posterior probabilities over the training sentence pairs $(F_r, E_r)$, $r = 1, \ldots, R$: $\arg\max_\theta \sum_r \log p_\theta(E_r | F_r)$ (5) The derivative for a single sentence pair $(F, E) = (f_1^J, e_1^I)$ is: $\frac{\partial}{\partial \theta} \log p_\theta(E | F) = \sum_{j', j} \sum_i p_i(j', j | f_1^J, e_1^I; \theta) \cdot \frac{\partial}{\partial \theta} \log p(j, e_i | j', e_0^{i-1}, f_1^J; \theta)$ (6) with HMM posterior weights $p_i(j', j | f_1^J, e_1^I; \theta)$, which can be computed using the forward-backward algorithm.", "The entire training procedure can be summarized as backpropagation in an EM framework: 1. compute: • the posterior HMM weights • the local gradients (backpropagation) 2.
update neural network weights Decoding In the decoding stage we still calculate the sum over alignments and apply a target-synchronous beam search for the target string.", "The auxiliary quantity for each unknown partial string $e_0^i$ is specified as $Q(i, j; e_0^i)$.", "During search, the partial hypothesis is extended from $e_0^{i-1}$ to $e_0^i$: $Q(i, j; e_0^i) = \sum_{j'} p(j, e_i | j', e_0^{i-1}, f_1^J) \cdot Q(i-1, j'; e_0^{i-1})$ (7) The decoder is shown in Algorithm 1.", "In the innermost loop (lines 11-13), alignments are hypothesized and used to calculate the auxiliary quantity $Q(i, j; e_0^i)$.", "Then for each source position $j$, the lexical distribution over the full target vocabulary is computed (line 14).", "The distributions are accumulated ($Q(i; e_0^i) = \sum_j Q(i, j; e_0^i)$, line 16), then sorted (line 18), and the best candidate translations ($\arg\max_{e_i} Q(i; e_0^i)$) lying within the beam are used to expand the partial hypotheses (lines 19-23).", "cache is a two-dimensional list of size $J \times |V_{\mathrm{src}}|$ (source vocabulary size), which is used to cache the current quantities.", "Whenever a partial hypothesis in the beam ends with the sentence end symbol (<EOF>), the counter is increased by 1 (lines 26-28).", "The translation is terminated if the counter reaches the beam size or the hypothesis sentence length reaches three times the source sentence length (line 6).", "If a hypothesis stops but its score is worse than other hypotheses, it is eliminated from the beam, but it still contests non-terminated hypotheses.", "During comparison the scores are normalized by hypothesis sentence length.", "Note that we have no explicit coverage constraints.", "This means that a source position can be revisited many times, thereby creating one-to-many alignment cases.", "This also allows unaligned source words.", "In the neural HMM decoder, word alignments are estimated and scored according to the distribution calculated by the neural network alignment model, so alignment decisions become part of the beam search.", "The search space consists of both alignment and translation decisions.", "In contrast, the search space in attention-based decoding consists only of translation decisions.", "The decoding complexity is $O(J^2 \cdot I)$ ($J$ = source sentence length, $I$ = target sentence length), compared to $O(J \cdot I)$ for attention-based models.", "These are theoretical complexities of decoding on a CPU, considering only source and target sentence lengths.", "In practice, the size of the neural network must also be taken into account, and there are some optimized matrix multiplications for decoding on a GPU.", "In general, the decoding speed of our model is about 3 times slower than that of a standard attention model (1.07 sentences per second vs.
3.00 sentences per second) on a single GPU.", "This is still an initial decoder, and we have not yet spent much time on accelerating its decoding.", "Optimizing our decoder would be promising future work.", "Experiments The experiments are conducted on the WMT 2017 German↔English and Chinese→English translation tasks, which consist of 5M and 23M parallel sentence pairs respectively.", "Translation quality is measured with the case-sensitive BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) metrics on newstests 2017, which contain 3004 (German↔English) and 2001 (Chinese→English) sentence pairs.", "For German and English preprocessing, we use the Moses tokenizer with hyphen splitting, and perform truecasing with Moses scripts (Koehn et al., 2007).", "For German↔English subword segmentation, we use 20K joint BPE operations.", "For the Chinese data, we segment it using the Jieba segmenter (https://github.com/fxsjy/jieba).", "We then learn a BPE model on the segmented Chinese, also using 20K merge operations.", "During training, sentences with a length greater than 50 subwords are filtered out.", "Attention-Based System The attention-based systems are trained with Sockeye (Hieber et al., 2017), which implements an attentional encoder-decoder with small modifications to the model in Bahdanau et al. (2015).", "The encoder and decoder word embeddings are of size 620.", "The encoder consists of a bidirectional layer with 1000 LSTMs with peephole connections to encode the source side.", "We use Adam (Kingma and Ba, 2015) as optimizer with a learning rate of 0.001, and a batch size of 50.", "The network is trained with 30% dropout for up to 500K iterations and evaluated every 10K iterations on the development set with BLEU.", "Decoding is done using beam search with a beam size of 12.", "In the end, the four best models are averaged as described in the beginning of Junczys-Dowmunt et al. (2016).", "Neural Hidden Markov Model The entire neural hidden Markov model is implemented in TensorFlow (Abadi et al., 2016).", "The feedforward models have three hidden layers of sizes 1000, 1000 and 500 respectively, with a 5-word source window and a 3-gram target history.", "200 nodes are used for word embeddings.", "The output layer of the neural lexicon model consists of around 25K nodes for all subword units, while the neural alignment model has a small output layer with 201 nodes, reflecting that the aligned position can jump within the range from −100 to 100.", "Apart from the basic projection layer, we also applied LSTM layers for the source and target word embeddings.", "The embedding layers have 350 nodes and the size of the projection layer is 800 (400 + 200 + 200, Figure 1).", "We use Adam as optimizer with a learning rate of 0.001.", "Neural lexicon and alignment models are trained with 30% dropout, and the norm of the gradient is clipped with a threshold of 1 (Pascanu et al., 2014).", "In decoding we use a beam size of 12, and the element-wise average of all weights of the four best models also results in better performance.", "Results We compare the neural HMM approach (Subsection 5.2) with the state-of-the-art attention-based approach (Subsection 5.1) on different translation tasks.", "The results are presented in Table 1.", "Compared to the model presented in Wang et al. (2017), switching to LSTM models has a clear advantage, improving the FFNN-based system by up to 1.3% BLEU and 1.8% TER.", "It seems that the HMM model benefits from richer features, such as
LSTM states, which are very similar to what an attention mechanism would require.", "We actually expected it to do with less, the reason being that alignment distributions get refined a posteriori and so they do not have to be as strong a priori.", "We can also observe that the performance of our approach is comparable with that of the state-of-the-art attention-based system, which has 25M more parameters, on all three tasks.", "Alignment Analysis We show an example from the German→English newstest 2017 in Figure 2, along with the attention and alignment matrices.", "We can observe that the neural network-based HMM generates a clearer alignment path compared to the attention weights.", "In this example, it can exactly estimate the alignment positions for the words wanted and of.", "Discussion We described a novel formulation for a neural network-based machine translation system, which applies neural networks to the conventional hidden Markov model.", "The training is end-to-end, and the model is monolithic and can be used as a standalone decoder.", "This results in a more modern and efficient way to use HMMs in machine translation and enables neural networks to benefit from HMMs.", "Experiments show that replacing attention with alignment does not significantly improve the translation performance of NMT.", "One possible reason is that alignment may fail to capture relevant contexts the way attention does.", "While alignment aims to identify translation equivalents between two languages, attention is designed to find relevant context for predicting the next target word.", "Source words with high attention weights are not necessarily translation equivalents of the target word.", "Although using alignment does not lead to significant improvements in terms of BLEU over attention, we think alignment-based NMT models are still useful for automatic post-editing and for developing coverage-based models.", "These might be interesting future directions to explore." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Motivation", "Neural Hidden Markov Model", "Decoding", "Experiments", "Attention-Based System", "Neural Hidden Markov Model", "Results", "Alignment Analysis", "Discussion" ] }
GEM-SciDuet-train-42#paper-1062#slide-8
Appendix Analysis
Attention-based NMT vs. Neural HMM: attention weight and alignment matrices visualized in heat map form, generated by the attention NMT baseline and the neural HMM for the example 'er wollte nie an irgendeiner Art von Auseinandersetzung teilnehmen' -> 'he ...'. Sample translations from the WMT German-English newstest2017 set, each giving the source, the reference, and the system output(s) (attention NMT, then neural HMM where both are shown); source words of interest are underlined, correct translations italicized and incorrect translations bold-faced in the original slide: (1) 28-jähriger Koch in San Francisco Mall tot aufgefunden / 28-Year-Old Chef Found Dead at San Francisco Mall / 28-year-old cook found dead in San Francisco Mall / 28-year-old cook in San Francisco Mall found dead. (2) Frankie hat in GB bereits fast 30 Jahre Gewinner geritten, was toll ist. / Frankie's been riding winners in the UK for the best part of 30 years which is great to see. / Frankie has ridden winners in the UK for almost 30 years, which is great. (3) Wer baut Braunschweigs günstige Wohnungen? / Who is going to build Braunschweig's low-cost housing? / Who does Braunschweig build cheap apartments? / Who builds Braunschweig's cheap apartments?
Attention-based NMT vs. Neural HMM: attention weight and alignment matrices visualized in heat map form, generated by the attention NMT baseline and the neural HMM for the example 'er wollte nie an irgendeiner Art von Auseinandersetzung teilnehmen' -> 'he ...'. Sample translations from the WMT German-English newstest2017 set, each giving the source, the reference, and the system output(s) (attention NMT, then neural HMM where both are shown); source words of interest are underlined, correct translations italicized and incorrect translations bold-faced in the original slide: (1) 28-jähriger Koch in San Francisco Mall tot aufgefunden / 28-Year-Old Chef Found Dead at San Francisco Mall / 28-year-old cook found dead in San Francisco Mall / 28-year-old cook in San Francisco Mall found dead. (2) Frankie hat in GB bereits fast 30 Jahre Gewinner geritten, was toll ist. / Frankie's been riding winners in the UK for the best part of 30 years which is great to see. / Frankie has ridden winners in the UK for almost 30 years, which is great. (3) Wer baut Braunschweigs günstige Wohnungen? / Who is going to build Braunschweig's low-cost housing? / Who does Braunschweig build cheap apartments? / Who builds Braunschweig's cheap apartments?
[]
GEM-SciDuet-train-43#paper-1063#slide-0
1063
Analyzing Linguistic Differences between Owner and Staff Attributed Tweets
Research on social media has to date assumed that all posts from an account are authored by the same person. In this study, we challenge this assumption and study the linguistic differences between posts signed by the account owner or attributed to their staff. We introduce a novel data set of tweets posted by U.S. politicians who self-reported their tweets using a signature. We analyze the linguistic topics and style features that distinguish the two types of tweets. Predictive results show that we are able to distinguish between owner and staff attributed tweets with good accuracy, even when not using any training data from that account.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103 ], "paper_content_text": [ "Introduction Social media has become one of the main venues for breaking news that come directly from primary sources.", "Platforms such as Twitter have started to play a key role in elections (Politico, 2017) and have become widely used by public figures to disseminate their activities and opinions.", "However, posts are rarely authored by the public figure who owns the account; rather, they are posted by staff who update followers on the thoughts, stances and activities of the owner.", "This study introduces a new application of Natural Language Processing: predicting which posts from a Twitter account are authored by the owner of an account.", "Direct applications include predicting owner authored tweets for unseen users and can be useful to political or PR advisers to gain a better understanding on how to craft more personal or engaging messages.", "Past research has experimented with predicting user types or traits from tweets (Pennacchiotti and Popescu, 2011; McCorriston et al., 2015) .", "However, all these studies have relied on the assumption that tweets posted from an account were all written by the same person.", "No previous study has looked at predicting which tweets from the same Twitter account were authored by different persons, here staffers or the owner of the Twitter account.", "Figure 1 shows an example of a U.S. politician who signs their tweets by adding '-PM' at the end of the tweet.", "Staff posts are likely to be different in terms of topics, style, timing or impact to posts attributed to the owner of the account.", "The goal of the present study is thus to: • analyze linguistic differences between the two types of tweets in terms of words, topics, style, type and impact; • build a model that predicts if a tweet is attributed to the account owner or their staff.", "To this end, we introduce a novel data set consisting of over 200,000 tweets from accounts of 147 U.S. 
politicians that are attributed to the owner or their staff.", "Evaluation on unseen accounts leads to an accuracy of up to .741 AUC.", "Similar account sharing behaviors exist in several other domains, such as Twitter accounts of entertainers (artists, TV hosts), public figures or CEOs who employ staff to author their tweets, or organizational accounts, which alternate between posting messages about important company updates and tweets about promotions, PR activity or customer service.", "Direct applications of our analysis include automatically predicting staff tweets for unseen users and gaining a better understanding of how to craft more personal messages, which can be useful to political or PR advisers.", "Related Work Several studies have looked at predicting the type of a Twitter account, most frequently individual vs. organizational, using linguistic features (De Choudhury et al., 2012; McCorriston et al., 2015; Mac Kim et al., 2017).", "A broad literature has been devoted to predicting personal traits from language use on Twitter, such as gender (Burger et al., 2011), age (Nguyen et al., 2011), geolocation (Cheng et al., 2010), political preference (Volkova et al., 2014), income, impact (Lampos et al., 2014), socioeconomic status (Aletras and Chamberlain, 2018), race (Preoţiuc-Pietro and Ungar, 2018) or personality (Schwartz et al., 2013a; Preoţiuc-Pietro et al., 2016).", "Related to our task is authorship attribution, where the goal is to predict the author of a given text.", "With few exceptions (Schwartz et al., 2013b), this was attempted on larger documents or books (Popescu and Dinu, 2007; Stamatatos, 2009; Juola et al., 2008; Koppel et al., 2009).", "In our case, the experiments are set up as the same binary classification task regardless of the account (owner vs.
staffer) which, unlike authorship attribution, allows for experiments across multiple user accounts.", "Additionally, in most authorship attribution studies, differences between authors consist mainly of the topics they write about.", "Our experimental setup limits the extent to which topic presence impacts the prediction, as all tweets are posted by US politicians and within the topics of the tweets from an account should be similar to each other.", "Pastiche detection is another related area of research (Dinu et al., 2012) , where models are trained to distinguish between an original text and a text written by one who aims to imitate the style of the original author, resulting in the documents having similar topics.", "Data We build a data set of Twitter accounts used by both the owner (the person who the account represents) and their staff.", "Several Twitter users attribute the authorship of a subset of their tweets to themselves by signing these with their initials or a hashtag, following the example of Barack Obama (Time, 2011) .", "The rest of the tweets are implicitly attributed to their staff.", "Thus, we use the Twitter user description to identify potential accounts where owners sign their tweets.", "We collect in total 1,365 potential user descriptions from Twitter that match a set of keyphrases indicative of personal tweet signatures (i.e., tweets by me signed, tweets signed, tweets are signed, staff unless noted, tweets from staff unless signed, tweets signed by, my tweets are signed).", "We then manually check all descriptions and filter out those not mentioning a signature, leaving us with 628 accounts.", "We aim to perform our analysis on a set of users from the same domain to limit variations caused by topic and we observe that the most numerous category of users who sign their messages are U.S. 
politicians, which leaves us with 147 accounts.", "We download all the tweets posted by these accounts that are accessible through the Twitter API (a maximum of 3,200).", "We remove the retweets made by an account, as these are not attributed to either the account owner or their staff.", "This results in a data set with a total of 202,024 tweets.", "We manually identified each user's signature from their profile description.", "To assign labels to tweets, we automatically matched the signature to each tweet using a regular expression.", "We remove the signature from all predictive experiments and feature analyses as this would make the classification task trivial.", "In total, 9,715 tweets (4.8% of the total) are signed by the account owners.", "While our task is to predict if a tweet is attributed to the owner or its staff, we assume this as a proxy to authorship if account owners are truthful when using the signature in their tweets.", "There is little incentive for owners to be untruthful, with potentially serious negative ramifications associated with public deception.", "We use DLATK, which handles social media content and markup such as emoticons or hashtags (Schwartz et al., 2017) .", "Further, we anonymize all usernames present in the tweet and URLs and replace them with placeholder tokens.", "Features We use a broad set of linguistic features motivated by past research on user trait prediction in our attempt to predict and interpret the difference between owner and staff attributed tweets.", "These include: LIWC.", "Traditional psychology studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) consisting of 73 manually constructed lists of words (Pennebaker et al., 2015) including some specific parts-of-speech, topical or stylistic categories.", "Each message is thereby represented as a frequency distribution over these categories.", "Word2Vec Clusters.", "An alternative to LIWC is to use automatically generated word clusters.", "These clusters of words can be thought of as topics, i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reduce the feature space and provide good interpretability.", "We use the method by to compute topics using Word2Vec similarity (Mikolov et al., 2013) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes.", "We present results using 200 topics as this gave the best predictive results.", "Each message is thus represented as an unweighted distribution over clusters.", "Sentiment & Emotions.", "We also investigate the extent to which tweets posted by the account owner express more or fewer emotions.", "The most popular model of discrete emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment Turney, 2010, 2013) .", "Using these models, we assign sentiment and emotion probabilities to each message.", "Unigrams.", "We use the bag-of-words representation to reduce each message to a normalised frequency distribution over the vocabulary consisting of all words used by at least 20% of the users (2,099 words in 
total).", "We chose this smaller vocabulary that is more representative of words used by a larger set of users such that models would be able to transfer better to unseen users.", "Tweet Features.", "We compute additional tweetlevel features such as: the length in characters and tokens (Length), the type of tweet encoding if this is an @-reply or contains a URL (Tweet Type), the time of the tweet represented as a one-hot vector over the hour of day and day of week (Post Time) and the number of retweets and likes the tweet received (Impact).", "Although the latter features are not available in a real-time predictive scenario, they are useful for analysis.", "Prediction Our hypothesis is that tweets attributed to the owner of the account are different than those attributed to staff, and that these patterns generalize to held-out accounts not included in the training data.", "Hence, we build predictive models and test them in two setups.", "First, we split the users into ten folds.", "Tweets used in training are all posted by 80% of the users, tweets from 10% of the users are used for hyperparameter tuning and tweets from the final 10% of the users are used in testing (Users).", "In the second experimental setup, we split all tweets into ten folds using the same split sizes (Tweets).", "We report the average performance across the ten folds.", "Due to class imbalance -only 4.8% of tweets are posted by the account owners -results are measured in ROC AUC, which is a more suitable metric in this setup.", "In our predictive experiments, we used logistic regression with Elastic Net regularization.", "As features, we use all feature types described in the previous section separately as well as together using a logistic regression model combining all feature sets (Combined).", "The results using both experimental setups -holding-out tweets or users -are presented in Table 1 .", "Results show that we can predict owner tweets with good performance and consistently better than chance, even when we have no training data for the users in the test set.", "The held-out user experimental setup is more challenging as reflected by lower predictive numbers for most language features, except for the LIWC features.", "One potential explanation for the high performance of the LIWC features in this setup is that these are low dimensional and are better at identifying general patterns which transfer better to unseen users rather than overfit the users from the training data.", "Table 1 : Predictive results with each feature type for classifying tweets attributed to account owners or staffers, measured using ROC AUC.", "Evaluation is performed using 10-fold cross-validation by holding out in each fold either: 10% of the tweets (Tweets) or all tweets posted by 10% of the users (Users).", "Analysis In this section we investigate the linguistic and tweet features distinctive of tweets attributed to the account owner and to staff.", "A few accounts are outliers in the frequency of their signed tweets, with up to 80% owner attributed tweets compared to only 4.8% on average.", "We perform our analysis on a subset of the data, in order for our linguistic analysis not to be driven by a few prolific users or by any imbalance in the ratio of owner/staff tweets across users.", "The data set is obtained as follows.", "Each account can contribute a minimum of 10, maximum of 100 owner attributed tweets.", "We then sample staff attributed tweets from each account such that these are nine times the number of tweets signed by the 
owner.", "Newer messages are preferred when sampling.", "This leads to a data set of 28,150 tweets with exactly a tenth of them attributed to the account owners (2,815).", "We perform analysis of all previously described feature sets using Pearson correlations following Schwartz et al.", "(2013a) .", "We compute Pearson correlations independently for each feature between its distribution across messages (features are first normalized to sum up to unit for each message) and a variable encoding if the tweet was attributed to the account owner or not.", "We correct for multiple comparisons using Simes correction.", "Top unigrams correlated with owner attributed tweets are presented in Table 3 , with the other group textual features (LIWC categories, Word2Vec topics and emotion features) in Table 2 .", "Tweet feature results are presented in Table 4 .", "LIWC Features r Name Top Words .111 FUNCTION to, the, for, in, of, and, a, is, on, out .102 PRONOUN our, we, you, i, your, my, us, his .101 AFFECT great, thank, support, thanks, proud, care .098 SOCIAL our, we, you, your, who, us, his, help, they .107 PREP to, for, in, of, on, at, with, Our analysis shows that owner tweets are associated to a greater extent with language destined to convey emotion or a state of being and to signal a personal relationship with another political figure.", "Tweets of congratulations, condolences and support are also specific of signed tweets.", "These tweets tend to be retweeted less by others, but get more likes than staff attributed tweets.", "Tweets attributed to account owners are more likely to be posted on weekends, are less likely to be replies to others and contain less links to websites or images.", "Remarkably, there are no textual features significantly correlated with staff attributed tweets.", "An analysis showed that these are more diverse and thus no significant patterns are consistent in association with text features such as unigrams, topic or LIWC categories.", "Conclusions This study introduced a novel application of NLP: predicting if tweets from an account are attributed to their owner or to staffers.", "Past research on predicting and studying Twitter account characteristics such as type or personal traits (e.g., gender, age) assumed that the same person is authoring all posts from that account.", "Using a novel data set, we showed that owner attributed tweets exhibit distinct linguistic patterns to those attributed to staffers.", "Even when tested on held-out user accounts, our predictive model of owner tweets reaches an average performance of .741 AUC.", "Future work could study other types of accounts with similar posting behaviors such as organizational accounts, explore other sources for ground truth tweet identity information (Robinson, 2016) or study the effects of user traits such as gender or political affiliation in tweeting signed content." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "Features", "Prediction", "Analysis", "Conclusions" ] }
GEM-SciDuet-train-43#paper-1063#slide-0
Motivation
2019 Bloomberg Finance L.P. All rights reserved.
2019 Bloomberg Finance L.P. All rights reserved.
[]
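The LIWC representation used as a feature set in the record above reduces each message to relative frequencies over hand-built word lists. A toy sketch of that representation follows; the category fragments are taken from Table 2 of the record above and are only small excerpts, since the full LIWC 2015 dictionaries are licensed.

```python
# Illustrative fragments only (from Table 2 above); not the licensed lists.
CATEGORIES = {
    "FUNCTION": {"to", "the", "for", "in", "of", "and", "a", "is", "on", "out"},
    "PRONOUN": {"our", "we", "you", "i", "your", "my", "us", "his"},
    "AFFECT": {"great", "thank", "support", "thanks", "proud", "care"},
    "SOCIAL": {"our", "we", "you", "your", "who", "us", "his", "help", "they"},
}

def liwc_features(tokens):
    """Represent one message as a frequency distribution over categories."""
    counts = {cat: 0 for cat in CATEGORIES}
    for token in tokens:
        t = token.lower()
        for cat, words in CATEGORIES.items():
            if t in words:
                counts[cat] += 1
    n = max(len(tokens), 1)
    return {cat: c / n for cat, c in counts.items()}

print(liwc_features("thanks for your support".split()))
```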
GEM-SciDuet-train-43#paper-1063#slide-1
1063
Analyzing Linguistic Differences between Owner and Staff Attributed Tweets
Research on social media has to date assumed that all posts from an account are authored by the same person. In this study, we challenge this assumption and study the linguistic differences between posts signed by the account owner or attributed to their staff. We introduce a novel data set of tweets posted by U.S. politicians who self-reported their tweets using a signature. We analyze the linguistic topics and style features that distinguish the two types of tweets. Predictive results show that we are able to distinguish between owner and staff attributed tweets with good accuracy, even when not using any training data from that account.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103 ], "paper_content_text": [ "Introduction Social media has become one of the main venues for breaking news that come directly from primary sources.", "Platforms such as Twitter have started to play a key role in elections (Politico, 2017) and have become widely used by public figures to disseminate their activities and opinions.", "However, posts are rarely authored by the public figure who owns the account; rather, they are posted by staff who update followers on the thoughts, stances and activities of the owner.", "This study introduces a new application of Natural Language Processing: predicting which posts from a Twitter account are authored by the owner of an account.", "Direct applications include predicting owner authored tweets for unseen users and can be useful to political or PR advisers to gain a better understanding on how to craft more personal or engaging messages.", "Past research has experimented with predicting user types or traits from tweets (Pennacchiotti and Popescu, 2011; McCorriston et al., 2015) .", "However, all these studies have relied on the assumption that tweets posted from an account were all written by the same person.", "No previous study has looked at predicting which tweets from the same Twitter account were authored by different persons, here staffers or the owner of the Twitter account.", "Figure 1 shows an example of a U.S. politician who signs their tweets by adding '-PM' at the end of the tweet.", "Staff posts are likely to be different in terms of topics, style, timing or impact to posts attributed to the owner of the account.", "The goal of the present study is thus to: • analyze linguistic differences between the two types of tweets in terms of words, topics, style, type and impact; • build a model that predicts if a tweet is attributed to the account owner or their staff.", "To this end, we introduce a novel data set consisting of over 200,000 tweets from accounts of 147 U.S. 
politicians that are attributed to the owner or their staff.", "Evaluation on unseen accounts leads to an accuracy of up to .741 AUC.", "Similar account sharing behaviors exist in several other domains, such as Twitter accounts of entertainers (artists, TV hosts), public figures or CEOs who employ staff to author their tweets, or organizational accounts, which alternate between posting messages about important company updates and tweets about promotions, PR activity or customer service.", "Direct applications of our analysis include automatically predicting staff tweets for unseen users and gaining a better understanding of how to craft more personal messages, which can be useful to political or PR advisers.", "Related Work Several studies have looked at predicting the type of a Twitter account, most frequently individual vs. organizational, using linguistic features (De Choudhury et al., 2012; McCorriston et al., 2015; Mac Kim et al., 2017).", "A broad literature has been devoted to predicting personal traits from language use on Twitter, such as gender (Burger et al., 2011), age (Nguyen et al., 2011), geolocation (Cheng et al., 2010), political preference (Volkova et al., 2014), income, impact (Lampos et al., 2014), socioeconomic status (Aletras and Chamberlain, 2018), race (Preoţiuc-Pietro and Ungar, 2018) or personality (Schwartz et al., 2013a; Preoţiuc-Pietro et al., 2016).", "Related to our task is authorship attribution, where the goal is to predict the author of a given text.", "With few exceptions (Schwartz et al., 2013b), this was attempted on larger documents or books (Popescu and Dinu, 2007; Stamatatos, 2009; Juola et al., 2008; Koppel et al., 2009).", "In our case, the experiments are set up as the same binary classification task regardless of the account (owner vs.
staffer) which, unlike authorship attribution, allows for experiments across multiple user accounts.", "Additionally, in most authorship attribution studies, differences between authors consist mainly of the topics they write about.", "Our experimental setup limits the extent to which topic presence impacts the prediction, as all tweets are posted by US politicians and within the topics of the tweets from an account should be similar to each other.", "Pastiche detection is another related area of research (Dinu et al., 2012) , where models are trained to distinguish between an original text and a text written by one who aims to imitate the style of the original author, resulting in the documents having similar topics.", "Data We build a data set of Twitter accounts used by both the owner (the person who the account represents) and their staff.", "Several Twitter users attribute the authorship of a subset of their tweets to themselves by signing these with their initials or a hashtag, following the example of Barack Obama (Time, 2011) .", "The rest of the tweets are implicitly attributed to their staff.", "Thus, we use the Twitter user description to identify potential accounts where owners sign their tweets.", "We collect in total 1,365 potential user descriptions from Twitter that match a set of keyphrases indicative of personal tweet signatures (i.e., tweets by me signed, tweets signed, tweets are signed, staff unless noted, tweets from staff unless signed, tweets signed by, my tweets are signed).", "We then manually check all descriptions and filter out those not mentioning a signature, leaving us with 628 accounts.", "We aim to perform our analysis on a set of users from the same domain to limit variations caused by topic and we observe that the most numerous category of users who sign their messages are U.S. 
politicians, which leaves us with 147 accounts.", "We download all the tweets posted by these accounts that are accessible through the Twitter API (a maximum of 3,200).", "We remove the retweets made by an account, as these are not attributed to either the account owner or their staff.", "This results in a data set with a total of 202,024 tweets.", "We manually identified each user's signature from their profile description.", "To assign labels to tweets, we automatically matched the signature to each tweet using a regular expression.", "We remove the signature from all predictive experiments and feature analyses as this would make the classification task trivial.", "In total, 9,715 tweets (4.8% of the total) are signed by the account owners.", "While our task is to predict if a tweet is attributed to the owner or its staff, we assume this as a proxy to authorship if account owners are truthful when using the signature in their tweets.", "There is little incentive for owners to be untruthful, with potentially serious negative ramifications associated with public deception.", "We use DLATK, which handles social media content and markup such as emoticons or hashtags (Schwartz et al., 2017) .", "Further, we anonymize all usernames present in the tweet and URLs and replace them with placeholder tokens.", "Features We use a broad set of linguistic features motivated by past research on user trait prediction in our attempt to predict and interpret the difference between owner and staff attributed tweets.", "These include: LIWC.", "Traditional psychology studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) consisting of 73 manually constructed lists of words (Pennebaker et al., 2015) including some specific parts-of-speech, topical or stylistic categories.", "Each message is thereby represented as a frequency distribution over these categories.", "Word2Vec Clusters.", "An alternative to LIWC is to use automatically generated word clusters.", "These clusters of words can be thought of as topics, i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reduce the feature space and provide good interpretability.", "We use the method by to compute topics using Word2Vec similarity (Mikolov et al., 2013) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes.", "We present results using 200 topics as this gave the best predictive results.", "Each message is thus represented as an unweighted distribution over clusters.", "Sentiment & Emotions.", "We also investigate the extent to which tweets posted by the account owner express more or fewer emotions.", "The most popular model of discrete emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment Turney, 2010, 2013) .", "Using these models, we assign sentiment and emotion probabilities to each message.", "Unigrams.", "We use the bag-of-words representation to reduce each message to a normalised frequency distribution over the vocabulary consisting of all words used by at least 20% of the users (2,099 words in 
total).", "We chose this smaller vocabulary that is more representative of words used by a larger set of users such that models would be able to transfer better to unseen users.", "Tweet Features.", "We compute additional tweetlevel features such as: the length in characters and tokens (Length), the type of tweet encoding if this is an @-reply or contains a URL (Tweet Type), the time of the tweet represented as a one-hot vector over the hour of day and day of week (Post Time) and the number of retweets and likes the tweet received (Impact).", "Although the latter features are not available in a real-time predictive scenario, they are useful for analysis.", "Prediction Our hypothesis is that tweets attributed to the owner of the account are different than those attributed to staff, and that these patterns generalize to held-out accounts not included in the training data.", "Hence, we build predictive models and test them in two setups.", "First, we split the users into ten folds.", "Tweets used in training are all posted by 80% of the users, tweets from 10% of the users are used for hyperparameter tuning and tweets from the final 10% of the users are used in testing (Users).", "In the second experimental setup, we split all tweets into ten folds using the same split sizes (Tweets).", "We report the average performance across the ten folds.", "Due to class imbalance -only 4.8% of tweets are posted by the account owners -results are measured in ROC AUC, which is a more suitable metric in this setup.", "In our predictive experiments, we used logistic regression with Elastic Net regularization.", "As features, we use all feature types described in the previous section separately as well as together using a logistic regression model combining all feature sets (Combined).", "The results using both experimental setups -holding-out tweets or users -are presented in Table 1 .", "Results show that we can predict owner tweets with good performance and consistently better than chance, even when we have no training data for the users in the test set.", "The held-out user experimental setup is more challenging as reflected by lower predictive numbers for most language features, except for the LIWC features.", "One potential explanation for the high performance of the LIWC features in this setup is that these are low dimensional and are better at identifying general patterns which transfer better to unseen users rather than overfit the users from the training data.", "Table 1 : Predictive results with each feature type for classifying tweets attributed to account owners or staffers, measured using ROC AUC.", "Evaluation is performed using 10-fold cross-validation by holding out in each fold either: 10% of the tweets (Tweets) or all tweets posted by 10% of the users (Users).", "Analysis In this section we investigate the linguistic and tweet features distinctive of tweets attributed to the account owner and to staff.", "A few accounts are outliers in the frequency of their signed tweets, with up to 80% owner attributed tweets compared to only 4.8% on average.", "We perform our analysis on a subset of the data, in order for our linguistic analysis not to be driven by a few prolific users or by any imbalance in the ratio of owner/staff tweets across users.", "The data set is obtained as follows.", "Each account can contribute a minimum of 10, maximum of 100 owner attributed tweets.", "We then sample staff attributed tweets from each account such that these are nine times the number of tweets signed by the 
owner.", "Newer messages are preferred when sampling.", "This leads to a data set of 28,150 tweets with exactly a tenth of them attributed to the account owners (2,815).", "We perform analysis of all previously described feature sets using Pearson correlations following Schwartz et al.", "(2013a) .", "We compute Pearson correlations independently for each feature between its distribution across messages (features are first normalized to sum up to unit for each message) and a variable encoding if the tweet was attributed to the account owner or not.", "We correct for multiple comparisons using Simes correction.", "Top unigrams correlated with owner attributed tweets are presented in Table 3 , with the other group textual features (LIWC categories, Word2Vec topics and emotion features) in Table 2 .", "Tweet feature results are presented in Table 4 .", "LIWC Features r Name Top Words .111 FUNCTION to, the, for, in, of, and, a, is, on, out .102 PRONOUN our, we, you, i, your, my, us, his .101 AFFECT great, thank, support, thanks, proud, care .098 SOCIAL our, we, you, your, who, us, his, help, they .107 PREP to, for, in, of, on, at, with, Our analysis shows that owner tweets are associated to a greater extent with language destined to convey emotion or a state of being and to signal a personal relationship with another political figure.", "Tweets of congratulations, condolences and support are also specific of signed tweets.", "These tweets tend to be retweeted less by others, but get more likes than staff attributed tweets.", "Tweets attributed to account owners are more likely to be posted on weekends, are less likely to be replies to others and contain less links to websites or images.", "Remarkably, there are no textual features significantly correlated with staff attributed tweets.", "An analysis showed that these are more diverse and thus no significant patterns are consistent in association with text features such as unigrams, topic or LIWC categories.", "Conclusions This study introduced a novel application of NLP: predicting if tweets from an account are attributed to their owner or to staffers.", "Past research on predicting and studying Twitter account characteristics such as type or personal traits (e.g., gender, age) assumed that the same person is authoring all posts from that account.", "Using a novel data set, we showed that owner attributed tweets exhibit distinct linguistic patterns to those attributed to staffers.", "Even when tested on held-out user accounts, our predictive model of owner tweets reaches an average performance of .741 AUC.", "Future work could study other types of accounts with similar posting behaviors such as organizational accounts, explore other sources for ground truth tweet identity information (Robinson, 2016) or study the effects of user traits such as gender or political affiliation in tweeting signed content." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "Features", "Prediction", "Analysis", "Conclusions" ] }
GEM-SciDuet-train-43#paper-1063#slide-1
Aim
2019 Bloomberg Finance L.P. All rights reserved.
2019 Bloomberg Finance L.P. All rights reserved.
[]
GEM-SciDuet-train-43#paper-1063#slide-2
1063
Analyzing Linguistic Differences between Owner and Staff Attributed Tweets
Research on social media has to date assumed that all posts from an account are authored by the same person. In this study, we challenge this assumption and study the linguistic differences between posts signed by the account owner or attributed to their staff. We introduce a novel data set of tweets posted by U.S. politicians who self-reported their tweets using a signature. We analyze the linguistic topics and style features that distinguish the two types of tweets. Predictive results show that we are able to distinguish between owner and staff attributed tweets with good accuracy, even when not using any training data from that account.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103 ], "paper_content_text": [ "Introduction Social media has become one of the main venues for breaking news that come directly from primary sources.", "Platforms such as Twitter have started to play a key role in elections (Politico, 2017) and have become widely used by public figures to disseminate their activities and opinions.", "However, posts are rarely authored by the public figure who owns the account; rather, they are posted by staff who update followers on the thoughts, stances and activities of the owner.", "This study introduces a new application of Natural Language Processing: predicting which posts from a Twitter account are authored by the owner of an account.", "Direct applications include predicting owner authored tweets for unseen users and can be useful to political or PR advisers to gain a better understanding on how to craft more personal or engaging messages.", "Past research has experimented with predicting user types or traits from tweets (Pennacchiotti and Popescu, 2011; McCorriston et al., 2015) .", "However, all these studies have relied on the assumption that tweets posted from an account were all written by the same person.", "No previous study has looked at predicting which tweets from the same Twitter account were authored by different persons, here staffers or the owner of the Twitter account.", "Figure 1 shows an example of a U.S. politician who signs their tweets by adding '-PM' at the end of the tweet.", "Staff posts are likely to be different in terms of topics, style, timing or impact to posts attributed to the owner of the account.", "The goal of the present study is thus to: • analyze linguistic differences between the two types of tweets in terms of words, topics, style, type and impact; • build a model that predicts if a tweet is attributed to the account owner or their staff.", "To this end, we introduce a novel data set consisting of over 200,000 tweets from accounts of 147 U.S. 
politicians that are attributed to the owner or their staff.", "Evaluation on unseen accounts leads to a performance of up to .741 AUC.", "Similar account sharing behaviors exist in several other domains, such as Twitter accounts of entertainers (artists, TV hosts), public figures or CEOs who employ staff to author their tweets, as well as organizational accounts, which alternate between posting messages about important company updates and tweets about promotions, PR activity or customer service.", "Direct applications of our analysis include automatically predicting staff tweets for unseen users and gaining a better understanding of how to craft more personal messages, which can be useful to political or PR advisers.", "Related Work Several studies have looked at predicting the type of a Twitter account, most frequently between individual or organizational, using linguistic features (De Choudhury et al., 2012; McCorriston et al., 2015; Mac Kim et al., 2017).", "A broad literature has been devoted to predicting personal traits from language use on Twitter, such as gender (Burger et al., 2011), age (Nguyen et al., 2011), geolocation (Cheng et al., 2010), political preference (Volkova et al., 2014), income, impact (Lampos et al., 2014), socioeconomic status (Aletras and Chamberlain, 2018), race (Preoţiuc-Pietro and Ungar, 2018) or personality (Schwartz et al., 2013a; Preoţiuc-Pietro et al., 2016).", "Related to our task is authorship attribution, where the goal is to predict the author of a given text.", "With few exceptions (Schwartz et al., 2013b), this was attempted on larger documents or books (Popescu and Dinu, 2007; Stamatatos, 2009; Juola et al., 2008; Koppel et al., 2009).", "In our case, the experiments are set up as the same binary classification task regardless of the account (owner vs.
staffer) which, unlike authorship attribution, allows for experiments across multiple user accounts.", "Additionally, in most authorship attribution studies, differences between authors consist mainly of the topics they write about.", "Our experimental setup limits the extent to which topic presence impacts the prediction, as all tweets are posted by US politicians, and the topics of tweets from the same account should be similar to each other.", "Pastiche detection is another related area of research (Dinu et al., 2012), where models are trained to distinguish between an original text and a text written by one who aims to imitate the style of the original author, resulting in the documents having similar topics.", "Data We build a data set of Twitter accounts used by both the owner (the person who the account represents) and their staff.", "Several Twitter users attribute the authorship of a subset of their tweets to themselves by signing these with their initials or a hashtag, following the example of Barack Obama (Time, 2011).", "The rest of the tweets are implicitly attributed to their staff.", "Thus, we use the Twitter user description to identify potential accounts where owners sign their tweets.", "We collect in total 1,365 potential user descriptions from Twitter that match a set of keyphrases indicative of personal tweet signatures (i.e., tweets by me signed, tweets signed, tweets are signed, staff unless noted, tweets from staff unless signed, tweets signed by, my tweets are signed).", "We then manually check all descriptions and filter out those not mentioning a signature, leaving us with 628 accounts.", "We aim to perform our analysis on a set of users from the same domain to limit variations caused by topic, and we observe that the most numerous category of users who sign their messages are U.S.
politicians, which leaves us with 147 accounts.", "We download all the tweets posted by these accounts that are accessible through the Twitter API (a maximum of 3,200).", "We remove the retweets made by an account, as these are not attributed to either the account owner or their staff.", "This results in a data set with a total of 202,024 tweets.", "We manually identified each user's signature from their profile description.", "To assign labels to tweets, we automatically matched the signature to each tweet using a regular expression.", "We remove the signature from all predictive experiments and feature analyses, as it would make the classification task trivial.", "In total, 9,715 tweets (4.8% of the total) are signed by the account owners.", "While our task is to predict if a tweet is attributed to the owner or their staff, we treat this as a proxy for authorship, assuming account owners are truthful when using the signature in their tweets.", "There is little incentive for owners to be untruthful, with potentially serious negative ramifications associated with public deception.", "We use DLATK, which handles social media content and markup such as emoticons or hashtags (Schwartz et al., 2017).", "Further, we anonymize all usernames and URLs present in the tweets and replace them with placeholder tokens.", "Features We use a broad set of linguistic features motivated by past research on user trait prediction in our attempt to predict and interpret the difference between owner and staff attributed tweets.", "These include: LIWC.", "Traditional psychology studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001), consisting of 73 manually constructed lists of words (Pennebaker et al., 2015) including some specific parts-of-speech, topical or stylistic categories.", "Each message is thereby represented as a frequency distribution over these categories.", "Word2Vec Clusters.", "An alternative to LIWC is to use automatically generated word clusters.", "These clusters of words can be thought of as topics, i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reduce the feature space and provide good interpretability.", "We use an existing method to compute topics using Word2Vec similarity (Mikolov et al., 2013) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes.", "We present results using 200 topics, as this gave the best predictive results.", "Each message is thus represented as an unweighted distribution over clusters.", "Sentiment & Emotions.", "We also investigate the extent to which tweets posted by the account owner express more or fewer emotions.", "The most popular model of discrete emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004), which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowdsourcing-derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment (Turney, 2010, 2013).", "Using these models, we assign sentiment and emotion probabilities to each message.", "Unigrams.", "We use the bag-of-words representation to reduce each message to a normalised frequency distribution over the vocabulary consisting of all words used by at least 20% of the users (2,099 words in
total).", "We chose this smaller vocabulary, which is more representative of words used by a larger set of users, so that models would be able to transfer better to unseen users.", "Tweet Features.", "We compute additional tweet-level features such as: the length in characters and tokens (Length), the type of tweet, encoding whether it is an @-reply or contains a URL (Tweet Type), the time of the tweet represented as a one-hot vector over the hour of day and day of week (Post Time) and the number of retweets and likes the tweet received (Impact).", "Although the latter features are not available in a real-time predictive scenario, they are useful for analysis.", "Prediction Our hypothesis is that tweets attributed to the owner of the account differ from those attributed to staff, and that these patterns generalize to held-out accounts not included in the training data.", "Hence, we build predictive models and test them in two setups.", "First, we split the users into ten folds.", "Tweets used in training are all posted by 80% of the users, tweets from 10% of the users are used for hyperparameter tuning and tweets from the final 10% of the users are used in testing (Users).", "In the second experimental setup, we split all tweets into ten folds using the same split sizes (Tweets).", "We report the average performance across the ten folds.", "Due to class imbalance (only 4.8% of tweets are posted by the account owners), results are measured in ROC AUC, which is a more suitable metric in this setup.", "In our predictive experiments, we used logistic regression with Elastic Net regularization.", "As features, we use all feature types described in the previous section separately, as well as together using a logistic regression model combining all feature sets (Combined).", "The results using both experimental setups (holding out tweets or users) are presented in Table 1.", "Results show that we can predict owner tweets with good performance and consistently better than chance, even when we have no training data for the users in the test set.", "The held-out user experimental setup is more challenging, as reflected by lower predictive numbers for most language features, except for the LIWC features.", "One potential explanation for the high performance of the LIWC features in this setup is that these are low dimensional and are better at identifying general patterns which transfer to unseen users rather than overfitting to the users from the training data.", "Table 1: Predictive results with each feature type for classifying tweets attributed to account owners or staffers, measured using ROC AUC.", "Evaluation is performed using 10-fold cross-validation by holding out in each fold either: 10% of the tweets (Tweets) or all tweets posted by 10% of the users (Users).", "Analysis In this section we investigate the linguistic and tweet features distinctive of tweets attributed to the account owner and to staff.", "A few accounts are outliers in the frequency of their signed tweets, with up to 80% owner attributed tweets, compared to only 4.8% on average.", "We perform our analysis on a subset of the data, in order for our linguistic analysis not to be driven by a few prolific users or by any imbalance in the ratio of owner/staff tweets across users.", "The data set is obtained as follows.", "Each account can contribute a minimum of 10 and a maximum of 100 owner attributed tweets.", "We then sample staff attributed tweets from each account such that these are nine times the number of tweets signed by the
owner.", "Newer messages are preferred when sampling.", "This leads to a data set of 28,150 tweets with exactly a tenth of them attributed to the account owners (2,815).", "We perform analysis of all previously described feature sets using Pearson correlations following Schwartz et al.", "(2013a).", "We compute Pearson correlations independently for each feature between its distribution across messages (features are first normalized to sum to one for each message) and a variable encoding whether the tweet was attributed to the account owner or not.", "We correct for multiple comparisons using Simes correction.", "Top unigrams correlated with owner attributed tweets are presented in Table 3, with the other groups of textual features (LIWC categories, Word2Vec topics and emotion features) in Table 2.", "Tweet feature results are presented in Table 4.", "LIWC Features (r / Name / Top Words): .111 FUNCTION: to, the, for, in, of, and, a, is, on, out; .102 PRONOUN: our, we, you, i, your, my, us, his; .101 AFFECT: great, thank, support, thanks, proud, care; .098 SOCIAL: our, we, you, your, who, us, his, help, they; .107 PREP: to, for, in, of, on, at, with. Our analysis shows that owner tweets are associated to a greater extent with language intended to convey emotion or a state of being and to signal a personal relationship with another political figure.", "Tweets of congratulations, condolences and support are also specific to signed tweets.", "These tweets tend to be retweeted less by others, but get more likes than staff attributed tweets.", "Tweets attributed to account owners are more likely to be posted on weekends, are less likely to be replies to others and contain fewer links to websites or images.", "Remarkably, there are no textual features significantly correlated with staff attributed tweets.", "An analysis showed that these are more diverse, and thus no significant patterns are consistently associated with text features such as unigrams, topics or LIWC categories.", "Conclusions This study introduced a novel application of NLP: predicting whether tweets from an account are attributed to their owner or to staffers.", "Past research on predicting and studying Twitter account characteristics such as type or personal traits (e.g., gender, age) assumed that the same person authors all posts from that account.", "Using a novel data set, we showed that owner attributed tweets exhibit linguistic patterns distinct from those attributed to staffers.", "Even when tested on held-out user accounts, our predictive model of owner tweets reaches an average performance of .741 AUC.", "Future work could study other types of accounts with similar posting behaviors such as organizational accounts, explore other sources for ground-truth tweet identity information (Robinson, 2016) or study the effects of user traits such as gender or political affiliation in tweeting signed content." ] }
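The Prediction section in the text above (elastic-net logistic regression, 10-fold cross-validation over held-out users, ROC AUC under a 4.8% positive rate) maps onto standard scikit-learn components. A minimal sketch under those assumptions; X, y and per-tweet user_ids are hypothetical inputs built from any of the feature sets, and the fixed C / l1_ratio stand in for the dev-fold tuning the paper describes. This is not the authors' code.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

def user_level_cv_auc(X, y, user_ids, n_splits=10):
    # GroupKFold guarantees no user's tweets appear in both train and test,
    # matching the harder "Users" evaluation setup described above.
    aucs = []
    for train_idx, test_idx in GroupKFold(n_splits=n_splits).split(X, y, groups=user_ids):
        clf = LogisticRegression(
            penalty="elasticnet", solver="saga", l1_ratio=0.5,
            C=1.0, max_iter=5000,  # assumed values; tuned on dev users in the paper
        )
        clf.fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]
        # ROC AUC is robust to the strong owner/staff class imbalance.
        aucs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aucs))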
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "Features", "Prediction", "Analysis", "Conclusions" ] }
GEM-SciDuet-train-43#paper-1063#slide-2
Data Acquisition
[]
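This record's slide is titled Data Acquisition; the paper text above describes flagging Twitter profile descriptions that match signature keyphrases as a first pass before manual filtering. A small illustrative sketch of that step: the keyphrase list is quoted from the paper, while descriptions (a handle-to-bio mapping) and the function name are assumptions.

import re

KEYPHRASES = [
    "tweets by me signed", "tweets signed", "tweets are signed",
    "staff unless noted", "tweets from staff unless signed",
    "tweets signed by", "my tweets are signed",
]
PATTERN = re.compile("|".join(re.escape(k) for k in KEYPHRASES), re.IGNORECASE)

def candidate_accounts(descriptions):
    # Return handles whose bio mentions a tweet-signing convention;
    # these candidates would still be manually checked, as in the paper.
    return [handle for handle, bio in descriptions.items() if PATTERN.search(bio)]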
GEM-SciDuet-train-43#paper-1063#slide-3
GEM-SciDuet-train-43#paper-1063#slide-3
Data Processing
[]
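This record's slide is titled Data Processing; the paper describes matching each account's signature with a regular expression, stripping it so the task is not trivial, and replacing usernames and URLs with placeholder tokens. A hypothetical sketch of those steps; the trailing-signature format (e.g. "-PM"), the placeholder strings and the function name are assumptions, not the authors' code.

import re

URL_RE = re.compile(r"https?://\S+")
MENTION_RE = re.compile(r"@\w+")

def label_and_clean(text, signature):
    # Return (is_owner, cleaned_text) for one tweet.
    sig_re = re.compile(r"\s*" + re.escape(signature) + r"\s*$")
    is_owner = bool(sig_re.search(text))
    text = sig_re.sub("", text)             # drop the signature itself
    text = URL_RE.sub("<URL>", text)        # anonymize links
    text = MENTION_RE.sub("<USER>", text)   # anonymize usernames
    return is_owner, text.strip()

# Example: label_and_clean("Proud of our team! -PM", "-PM") -> (True, "Proud of our team!")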
GEM-SciDuet-train-43#paper-1063#slide-4
GEM-SciDuet-train-43#paper-1063#slide-4
Features
[]
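This record's slide is titled Features; among the feature sets described in the paper text are 200 Word2Vec-cluster "topics" obtained via spectral clustering, with each message represented as a distribution over clusters. A rough sketch of one way to build such features with scikit-learn, assuming a small pre-trained word-to-vector mapping named vectors; the cosine-affinity construction and all names are assumptions, not the authors' pipeline.

import numpy as np
from sklearn.cluster import SpectralClustering

def build_word_clusters(vectors, n_clusters=200, seed=0):
    # vectors: dict mapping word -> embedding; small vocabulary assumed,
    # since the dense affinity matrix below is O(n^2) in vocabulary size.
    words = list(vectors)
    V = np.stack([vectors[w] for w in words])
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    affinity = np.clip(V @ V.T, 0.0, None)  # non-negative cosine similarity
    labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                random_state=seed).fit_predict(affinity)
    return dict(zip(words, labels))

def cluster_distribution(tokens, word2cluster, n_clusters=200):
    # Represent one tweet as its normalized usage of the word clusters.
    dist = np.zeros(n_clusters)
    hits = [word2cluster[t] for t in tokens if t in word2cluster]
    for c in hits:
        dist[c] += 1.0
    return dist / max(len(hits), 1)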
GEM-SciDuet-train-43#paper-1063#slide-5
1063
Analyzing Linguistic Differences between Owner and Staff Attributed Tweets
Research on social media has to date assumed that all posts from an account are authored by the same person. In this study, we challenge this assumption and study the linguistic differences between posts signed by the account owner or attributed to their staff. We introduce a novel data set of tweets posted by U.S. politicians who self-reported their tweets using a signature. We analyze the linguistic topics and style features that distinguish the two types of tweets. Predictive results show that we are able to distinguish between owner and staff attributed tweets with good accuracy, even when not using any training data from that account.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103 ], "paper_content_text": [ "Introduction Social media has become one of the main venues for breaking news that come directly from primary sources.", "Platforms such as Twitter have started to play a key role in elections (Politico, 2017) and have become widely used by public figures to disseminate their activities and opinions.", "However, posts are rarely authored by the public figure who owns the account; rather, they are posted by staff who update followers on the thoughts, stances and activities of the owner.", "This study introduces a new application of Natural Language Processing: predicting which posts from a Twitter account are authored by the owner of an account.", "Direct applications include predicting owner authored tweets for unseen users and can be useful to political or PR advisers to gain a better understanding on how to craft more personal or engaging messages.", "Past research has experimented with predicting user types or traits from tweets (Pennacchiotti and Popescu, 2011; McCorriston et al., 2015) .", "However, all these studies have relied on the assumption that tweets posted from an account were all written by the same person.", "No previous study has looked at predicting which tweets from the same Twitter account were authored by different persons, here staffers or the owner of the Twitter account.", "Figure 1 shows an example of a U.S. politician who signs their tweets by adding '-PM' at the end of the tweet.", "Staff posts are likely to be different in terms of topics, style, timing or impact to posts attributed to the owner of the account.", "The goal of the present study is thus to: • analyze linguistic differences between the two types of tweets in terms of words, topics, style, type and impact; • build a model that predicts if a tweet is attributed to the account owner or their staff.", "To this end, we introduce a novel data set consisting of over 200,000 tweets from accounts of 147 U.S. 
politicians that are attributed to the owner or their staff.", "1 Evaluation on unseen accounts leads to an accuracy of up to .741 AUC.", "Similar account sharing behaviors exists in several other domains such as Twitter accounts of entertainers (artists, TV hosts), public figures or CEOs who employ staff to author their tweets or with organi-zational accounts, which alternate between posting messages about important company updates and tweets about promotions, PR activity or customer service.", "Direct applications of our analysis include automatically predicting staff tweets for unseen users and gaining a better understanding on how to craft more personal messages which can be useful to political or PR advisers.", "Related Work Several studies have looked at predicting the type of a Twitter account, most frequently between individual or organizational, using linguistic features (De Choudhury et al., 2012; McCorriston et al., 2015; Mac Kim et al., 2017) .", "A broad literature has been devoted to predicting personal traits from language use on Twitter, such as gender (Burger et al., 2011) , age (Nguyen et al., 2011) , geolocation (Cheng et al., 2010) , political preference (Volkova et al., 2014; , income , impact (Lampos et al., 2014) , socioeconomic status (Aletras and Chamberlain, 2018) , race (Preoţiuc-Pietro and Ungar, 2018) or personality (Schwartz et al., 2013a; Preoţiuc-Pietro et al., 2016 ).", "Related to our task is authorship attribution, where the goal is to predict the author of a given text.", "With few exceptions (Schwartz et al., 2013b) , this was attempted on larger documents or books (Popescu and Dinu, 2007; Stamatatos, 2009; Juola et al., 2008; Koppel et al., 2009 ).", "In our case, the experiments are set up as the same binary classification task regardless of the account (owner vs. 
staffer) which, unlike authorship attribution, allows for experiments across multiple user accounts.", "Additionally, in most authorship attribution studies, differences between authors consist mainly of the topics they write about.", "Our experimental setup limits the extent to which topic presence impacts the prediction, as all tweets are posted by US politicians and within the topics of the tweets from an account should be similar to each other.", "Pastiche detection is another related area of research (Dinu et al., 2012) , where models are trained to distinguish between an original text and a text written by one who aims to imitate the style of the original author, resulting in the documents having similar topics.", "Data We build a data set of Twitter accounts used by both the owner (the person who the account represents) and their staff.", "Several Twitter users attribute the authorship of a subset of their tweets to themselves by signing these with their initials or a hashtag, following the example of Barack Obama (Time, 2011) .", "The rest of the tweets are implicitly attributed to their staff.", "Thus, we use the Twitter user description to identify potential accounts where owners sign their tweets.", "We collect in total 1,365 potential user descriptions from Twitter that match a set of keyphrases indicative of personal tweet signatures (i.e., tweets by me signed, tweets signed, tweets are signed, staff unless noted, tweets from staff unless signed, tweets signed by, my tweets are signed).", "We then manually check all descriptions and filter out those not mentioning a signature, leaving us with 628 accounts.", "We aim to perform our analysis on a set of users from the same domain to limit variations caused by topic and we observe that the most numerous category of users who sign their messages are U.S. 
politicians, which leaves us with 147 accounts.", "We download all the tweets posted by these accounts that are accessible through the Twitter API (a maximum of 3,200).", "We remove the retweets made by an account, as these are not attributed to either the account owner or their staff.", "This results in a data set with a total of 202,024 tweets.", "We manually identified each user's signature from their profile description.", "To assign labels to tweets, we automatically matched the signature to each tweet using a regular expression.", "We remove the signature from all predictive experiments and feature analyses as this would make the classification task trivial.", "In total, 9,715 tweets (4.8% of the total) are signed by the account owners.", "While our task is to predict if a tweet is attributed to the owner or its staff, we assume this as a proxy to authorship if account owners are truthful when using the signature in their tweets.", "There is little incentive for owners to be untruthful, with potentially serious negative ramifications associated with public deception.", "We use DLATK, which handles social media content and markup such as emoticons or hashtags (Schwartz et al., 2017) .", "Further, we anonymize all usernames present in the tweet and URLs and replace them with placeholder tokens.", "Features We use a broad set of linguistic features motivated by past research on user trait prediction in our attempt to predict and interpret the difference between owner and staff attributed tweets.", "These include: LIWC.", "Traditional psychology studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) consisting of 73 manually constructed lists of words (Pennebaker et al., 2015) including some specific parts-of-speech, topical or stylistic categories.", "Each message is thereby represented as a frequency distribution over these categories.", "Word2Vec Clusters.", "An alternative to LIWC is to use automatically generated word clusters.", "These clusters of words can be thought of as topics, i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reduce the feature space and provide good interpretability.", "We use the method by to compute topics using Word2Vec similarity (Mikolov et al., 2013) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes.", "We present results using 200 topics as this gave the best predictive results.", "Each message is thus represented as an unweighted distribution over clusters.", "Sentiment & Emotions.", "We also investigate the extent to which tweets posted by the account owner express more or fewer emotions.", "The most popular model of discrete emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment (Mohammad and Turney, 2010, 2013) .", "Using these models, we assign sentiment and emotion probabilities to each message.", "Unigrams.", "We use the bag-of-words representation to reduce each message to a normalised frequency distribution over the vocabulary consisting of all words used by at least 20% of the users (2,099 words in
total).", "We chose this smaller vocabulary that is more representative of words used by a larger set of users such that models would be able to transfer better to unseen users.", "Tweet Features.", "We compute additional tweetlevel features such as: the length in characters and tokens (Length), the type of tweet encoding if this is an @-reply or contains a URL (Tweet Type), the time of the tweet represented as a one-hot vector over the hour of day and day of week (Post Time) and the number of retweets and likes the tweet received (Impact).", "Although the latter features are not available in a real-time predictive scenario, they are useful for analysis.", "Prediction Our hypothesis is that tweets attributed to the owner of the account are different than those attributed to staff, and that these patterns generalize to held-out accounts not included in the training data.", "Hence, we build predictive models and test them in two setups.", "First, we split the users into ten folds.", "Tweets used in training are all posted by 80% of the users, tweets from 10% of the users are used for hyperparameter tuning and tweets from the final 10% of the users are used in testing (Users).", "In the second experimental setup, we split all tweets into ten folds using the same split sizes (Tweets).", "We report the average performance across the ten folds.", "Due to class imbalance -only 4.8% of tweets are posted by the account owners -results are measured in ROC AUC, which is a more suitable metric in this setup.", "In our predictive experiments, we used logistic regression with Elastic Net regularization.", "As features, we use all feature types described in the previous section separately as well as together using a logistic regression model combining all feature sets (Combined).", "The results using both experimental setups -holding-out tweets or users -are presented in Table 1 .", "Results show that we can predict owner tweets with good performance and consistently better than chance, even when we have no training data for the users in the test set.", "The held-out user experimental setup is more challenging as reflected by lower predictive numbers for most language features, except for the LIWC features.", "One potential explanation for the high performance of the LIWC features in this setup is that these are low dimensional and are better at identifying general patterns which transfer better to unseen users rather than overfit the users from the training data.", "Table 1 : Predictive results with each feature type for classifying tweets attributed to account owners or staffers, measured using ROC AUC.", "Evaluation is performed using 10-fold cross-validation by holding out in each fold either: 10% of the tweets (Tweets) or all tweets posted by 10% of the users (Users).", "Analysis In this section we investigate the linguistic and tweet features distinctive of tweets attributed to the account owner and to staff.", "A few accounts are outliers in the frequency of their signed tweets, with up to 80% owner attributed tweets compared to only 4.8% on average.", "We perform our analysis on a subset of the data, in order for our linguistic analysis not to be driven by a few prolific users or by any imbalance in the ratio of owner/staff tweets across users.", "The data set is obtained as follows.", "Each account can contribute a minimum of 10, maximum of 100 owner attributed tweets.", "We then sample staff attributed tweets from each account such that these are nine times the number of tweets signed by the 
owner.", "Newer messages are preferred when sampling.", "This leads to a data set of 28,150 tweets with exactly a tenth of them attributed to the account owners (2,815).", "We perform analysis of all previously described feature sets using Pearson correlations following Schwartz et al.", "(2013a) .", "We compute Pearson correlations independently for each feature between its distribution across messages (features are first normalized to sum up to unit for each message) and a variable encoding if the tweet was attributed to the account owner or not.", "We correct for multiple comparisons using Simes correction.", "Top unigrams correlated with owner attributed tweets are presented in Table 3 , with the other group textual features (LIWC categories, Word2Vec topics and emotion features) in Table 2 .", "Tweet feature results are presented in Table 4 .", "LIWC Features r Name Top Words .111 FUNCTION to, the, for, in, of, and, a, is, on, out .102 PRONOUN our, we, you, i, your, my, us, his .101 AFFECT great, thank, support, thanks, proud, care .098 SOCIAL our, we, you, your, who, us, his, help, they .107 PREP to, for, in, of, on, at, with, Our analysis shows that owner tweets are associated to a greater extent with language destined to convey emotion or a state of being and to signal a personal relationship with another political figure.", "Tweets of congratulations, condolences and support are also specific of signed tweets.", "These tweets tend to be retweeted less by others, but get more likes than staff attributed tweets.", "Tweets attributed to account owners are more likely to be posted on weekends, are less likely to be replies to others and contain less links to websites or images.", "Remarkably, there are no textual features significantly correlated with staff attributed tweets.", "An analysis showed that these are more diverse and thus no significant patterns are consistent in association with text features such as unigrams, topic or LIWC categories.", "Conclusions This study introduced a novel application of NLP: predicting if tweets from an account are attributed to their owner or to staffers.", "Past research on predicting and studying Twitter account characteristics such as type or personal traits (e.g., gender, age) assumed that the same person is authoring all posts from that account.", "Using a novel data set, we showed that owner attributed tweets exhibit distinct linguistic patterns to those attributed to staffers.", "Even when tested on held-out user accounts, our predictive model of owner tweets reaches an average performance of .741 AUC.", "Future work could study other types of accounts with similar posting behaviors such as organizational accounts, explore other sources for ground truth tweet identity information (Robinson, 2016) or study the effects of user traits such as gender or political affiliation in tweeting signed content." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "Features", "Prediction", "Analysis", "Conclusions" ] }
GEM-SciDuet-train-43#paper-1063#slide-5
Prediction
Tweet Length Tweet Type Tweet Time Tweet Impact LIWC Word2Vec Clusters Unigrams Combined
Tweet Length Tweet Type Tweet Time Tweet Impact LIWC Word2Vec Clusters Unigrams Combined
[]
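The Prediction setup in the record above (logistic regression with Elastic Net regularization, scored with ROC AUC, with folds held out by user account) could look roughly like the sketch below. It is not the authors' released code: the paper's separate hyperparameter-tuning fold is omitted for brevity, and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

def evaluate_held_out_users(X, y, user_ids, n_folds=10):
    """X: (n_tweets, n_features); y: 1 = owner tweet; user_ids: account per tweet."""
    aucs = []
    folds = GroupKFold(n_splits=n_folds).split(X, y, groups=user_ids)
    for train_idx, test_idx in folds:
        clf = LogisticRegression(
            penalty="elasticnet", solver="saga",  # saga supports Elastic Net
            l1_ratio=0.5, C=1.0, max_iter=5000,
        )
        clf.fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aucs))
```

GroupKFold keeps every tweet of an account in the same fold, which mirrors the held-out Users setup; the Tweets setup would instead use plain ten-fold splits over tweets.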
GEM-SciDuet-train-43#paper-1063#slide-6
GEM-SciDuet-train-43#paper-1063#slide-6
Analysis
* All differences between means significant at p < .001, Mann-Whitney. - Congratulations, condolences and support - More personal pronouns - More function words - More positive and negative sentiment No features are correlated with unsigned tweets: @-Reply - More generic usage Sent on Weekends Other feature analysis in paper # Retweets
* All differences between means significant at p < .001, Mann-Whitney. - Congratulations, condolences and support - More personal pronouns - More function words - More positive and negative sentiment No features are correlated with unsigned tweets: @-Reply - More generic usage Sent on Weekends Other feature analysis in paper # Retweets
[]
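The univariate analysis behind this Analysis slide (Pearson correlation of each message-normalized feature against the owner/staff label, corrected for multiple comparisons with Simes correction) is sketched below. This is not the authors' code; the Simes correction is implemented here in its Benjamini-Hochberg step-up form, which is built on Simes' inequality, and the function and argument names are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def simes_adjust(p_values):
    """Step-up p-value adjustment built on Simes' inequality
    (the Benjamini-Hochberg form)."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downwards.
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.minimum(adjusted, 1.0)
    return out

def owner_feature_correlations(features, is_owner, alpha=0.05):
    """features: (n_tweets, n_features) raw counts; is_owner: 0/1 labels.
    Returns (feature_index, r) pairs surviving the correction,
    sorted by absolute correlation strength."""
    # Normalize each message's features to sum to one, as described above.
    rows = features / np.maximum(features.sum(axis=1, keepdims=True), 1e-12)
    results = [pearsonr(rows[:, j], is_owner) for j in range(rows.shape[1])]
    r = np.array([res[0] for res in results])
    p_adj = simes_adjust([res[1] for res in results])
    kept = np.where(p_adj <= alpha)[0]
    return sorted(((int(j), float(r[j])) for j in kept), key=lambda t: -abs(t[1]))
```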
GEM-SciDuet-train-43#paper-1063#slide-7
GEM-SciDuet-train-43#paper-1063#slide-7
Takeaways
[]
GEM-SciDuet-train-44#paper-1064#slide-0
1064
A Crowd-Annotated Spanish Corpus for Humor Analysis
Computational Humor involves several tasks, such as humor recognition, humor generation, and humor scoring, for which it is useful to have human-curated data. In this work we present a corpus of 27,000 tweets written in Spanish and crowd-annotated by their humor value and funniness score, with about four annotations per tweet, tagged by 1,300 people over the Internet. It is equally divided between tweets coming from humorous and non-humorous accounts. The interannotator agreement Krippendorff's alpha value is 0.5710. The dataset is available for general usage and can serve as a basis for humor detection and as a first step to tackle subjectivity.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction Computational Humor studies humor from a computational perspective, involving several tasks such as humor recognition, which aims to tell if a piece of text is humorous or not; humor generation, with the objective of generating new texts with funny content; and humor scoring, whose goal is to predict how funny a piece of text is.", "In order to carry out this kind of tasks through supervised machine learning methods, humancurated data is necessary.", "Castro et al.", "(2016) built a humor classifier for Spanish and provided a dataset for humor recognition.", "However, there are some issues: few annotations per instance, low annotator agreement, and limited variety of sources for the humorous and mostly for the nonhumorous tweets (the latter were only about news, inspirational thoughts and curious facts).", "Up to our knowledge, there is no other dataset to work on humor comprehension in Spanish.", "Some other authors, such as Mihalcea and Strapparava (2005a,b) ; Sjöbergh and Araki (2007) have tackled humor recognition in English texts, building their own corpora by downloading one-liners (onesentence jokes) from the Internet, since working with longer texts would involve additional work, such as determining humor scope.", "The microblogging platform Twitter has been found particularly useful for building humor corpora due to its public availability and the fact that its short messages are suitable for jokes or humorous comments.", "Castro et al.", "(2016) built their corpus based on Twitter, selecting nine humorous accounts and nine non-humorous accounts about news, thoughts and curious facts.", "Reyes et al.", "(2013) built a corpus for detecting irony in tweets by searching for several hashtags (i.e., #irony, #humor, #education and #politics), which is also used in Barbieri and Saggion (2014) to train a classifier that detects humor.", "More recently, Potash et al.", "(2017) built a corpus based on tweets that aims to distinguish the degree of funniness in a given tweet.", "They used the tweet set issued in response to a TV game show, labeling which tweets were considered humorous by the show.", "In this work we present a crowd-annotated Spanish corpus of tweets tagged with a humor/no humor value and also by a funniness score from one to five.", "The corpus contains tweets extracted from varied sources and has several annotations per tweet, reaching a high humor inter-annotator agreement.", "The contribution of this work is twofold: the dataset is not only useful for building a humor classifier but it also serves to approach subjectivity in humor and funniness.", "Even though there are not enough annotations per tweet as required to study subjectivity in a genuine way with techniques such as the ones by Geng (2016) , the dataset aids as a playground to study the funniness and disagree-ment among several people.", "This document is organized as follows.", "Section 2 explains where and how we obtained the data, and Section 3 describes how it was annotated.", "In Section 
4 we present the corpus, and we address the analysis in Section 5.", "Finally, in Section 6 we draw the conclusions and present future work.", "Extraction The aim of the extraction and annotation process was to build a corpus of at least 20,000 tweets that was as balanced as possible between the humor and not humor classes.", "Furthermore, as we intended to have a way of calculating the funniness score of a tweet, we needed to have several votes for the tweets that were considered humorous.", "As we wanted to have both humorous and non-humorous tweet samples, we extracted tweets from selected accounts and from realtime samples.", "For the former, based on Castro et al.", "(2016) , we selected tweets from fifty humorous accounts from Spanish-speaking countries, and took a random sample of size 12,000.", "For the latter, we fetched tweet samples written in Spanish throughout February 2018, and from this collection we took another random sample of size 12,000.", "Note that we preferred to take realtime tweet samples as we did not want to introduce bias by selecting certain negative examples, such as news or inspirational thoughts as in Castro et al.", "(2016) and Mihalcea and Strapparava (2005b) .", "From both sources we ignored retweets, responses, citations and tweets containing links, as we wanted the text to be self-contained.", "As expected, both sources contained a mix of humorous and non-humorous tweets.", "In the case of humorous accounts, this may be due to the fact that many tweets are used to increase the number of followers, expressing an opinion on a current event or supporting some popular cause.", "We first aimed to have five votes for each tweet, and to decide which tweet was humorous by simple majority.", "However, at a certain stage during the annotation process, we noticed that the users were voting too many tweets as non-humorous, and the result was highly unbalanced.", "Because of this, we made some adjustments in the corpus and the process: as the target was to have five votes for each tweet, we considered that the tweets that already had three non-humorous annotations at this stage should be considered as not humor, then we deprioritized them so the users could focus on annotating the rest of the tweets that were still ambiguous.", "(Footnote 1) The language detection feature is provided by the Twitter REST API.", "(Figure 1 caption) The annotator is asked whether the tweet intends to be humorous.", "The available options are \"Yes\", \"No\" or \"Skip\".", "If the annotator selects \"Yes\", five emoji are shown so the annotator can specify how funny he considers the tweet.", "The emoji also include labels describing the funniness levels.", "We also injected 4,500 more tweets randomly extracted only from the humorous accounts.", "These new tweets were also prioritized since they had fewer annotations than the rest.", "Annotation A crowdsourced web annotation task was carried out to tag all tweets.", "The annotators were shown tweets as in Fig.", "1 .", "The tweets were randomly chosen but web session information was kept to avoid showing duplicates.", "We tried to keep the user interface as intuitive and self-explanatory as possible, trying not to induce any bias on users and letting them come up with their own definition of humor.", "The simple and friendly interface is meant to keep the users engaged and having fun while classifying tweets as humorous or not, and how funny they are, with as few instructions as possible.", "If a person decides that a tweet is humorous, he has to rate it from
one to five by using emoji.", "In this way, the annotator gives more information rather than just stating the tweet is humorous.", "We also allowed annotators to skip a tweet or click a help button for more information.", "We consider that explicitly asking the annotator if the text intends to be humorous makes the distinction between the Not Humorous and Not Funny classes less ambiguous, which we believe was a problem of the Castro et al. (2016) user interface.", "Also, we consider our emoji-rated funniness score to be clearer for annotators than their star-based rating.", "The web page was shared on popular social networks along with some context about the task and the annotation period occurred between March 8th and 27th, 2018.", "The first tweets shown to every session were the same: three tweets for which we know a clear answer (one of them was humorous and the other two were not).", "These first tweets (\"test tweets\") were meant as a way of introducing the user into how the interface works, and also as an initial way for evaluating the quality of the annotations.", "After the introductory tweets, the rest of the tweets were sampled randomly, starting with the ones with the least number of votes.", "Corpus The dataset consists of two CSV files: tweets and annotations.", "The former contains the identifier and origin (which can be the realtime samples or the selected accounts) for each one of the 27,282 tweets (footnote 3), while the latter contains the tweet identifier, session identifier, date and annotation value for each one of the 117,800 annotations received during the annotation phase (including the times the skip button was pressed, 2,959 times).", "The dataset was released and it is available online.", "When compiling the final version of the corpus, we considered the annotations of users that did not answer the first three tweets correctly as having lower quality.", "These sessions should not be used for training or testing machine learning algorithms.", "Fortunately, only a small number of annotations had to be discarded because of this reason.", "The final number of annotations is 107,634 (not including the times the skip button was pressed), including 3,916 annotations assigned to the test tweets themselves.", "Analysis Annotation Distribution Each tweet received 3.8 annotations on average, with a standard deviation of 1.16, not considering the test tweets as they are outliers (they have a large number of annotations).", "The annotation distribution is shown in Fig.", "2 .", "(Footnote 3) Tweet text is not included in the corpus due to Twitter Terms and Conditions.", "They can be obtained from the IDs.", "The histogram is highly concentrated: more than 98% of the tweets received between two and six annotations each.", "Even though the strategy was to show random tweets among the ones with fewer annotations, note that there are tweets with fewer than three annotations because some annotations were finally filtered out.", "At the same time, there are some tweets with more than six annotations because we merged annotations from a few dozen duplicate tweets.", "Also, note that there is a considerable number of tweets with at least six annotations (1,001).", "This subset can be useful to study the different annotator opinions under the same instances.", "Class Distribution Fig.", "3 shows how the classes are distributed between the annotations.", "Roughly two thirds were assigned to the class Not Humorous, agreeing with the fact that there seem to be more non-humorous tweets from humorous accounts than the other way
"Analysis Annotation Distribution Each tweet received 3.8 annotations on average, with a standard deviation of 1.16, not considering the test tweets as they are outliers (they have a large number of annotations).", "The annotation distribution is shown in Fig. 2.", "The histogram is highly concentrated: more than 98% of the tweets received between two and six annotations each.", "Even though the strategy was to show random tweets among the ones with the fewest annotations, note that there are tweets with fewer than three annotations because some annotations were filtered out in the end.", "At the same time, there are some tweets with more than six annotations because we merged annotations from a few dozen duplicate tweets.", "Also, note that there is a considerable number of tweets with at least six annotations (1,001).", "This subset can be useful for studying different annotator opinions on the same instances.", "Class Distribution Fig. 3 shows how the classes are distributed among the annotations.", "Roughly two thirds were assigned to the class Not Humorous, consistent with the fact that there seem to be more non-humorous tweets from humorous accounts than the other way around.", "The graph also indicates that, according to the annotators, there is a bias towards bad jokes in humor.", "We use a simple majority of votes to categorize a tweet as humorous or not humorous, and a weighted average to compute the funniness score, only for humorous tweets.", "The scale goes from one (Not Funny) to five (Excellent).", "Under this scheme, 27.01% of the tweets are humorous and 70.6% are not humorous, while 2.39% are undecided (2.38% tied and 0.01% with no annotations).", "At the same time, humorous tweets have little funniness overall: the average funniness score is 1.35 with a standard deviation of 0.85.", "Annotators Distribution There were 1,271 annotators who tagged the tweets roughly as follows: two annotators tagged 13,000 tweets, then one annotated 8,000, the next eight annotated between one and three thousand, the next 105 annotated between one hundred and one thousand, and the rest annotated fewer than a hundred, with 32,584 annotations in total (see Fig. 4).", "The average was 83 tags per annotator, with a standard deviation of 597.", "Annotators Agreement An important aspect to analyze is to what extent the annotators agree on which tweets are humorous.", "We used the alpha measure from Krippendorff (2012), a generalized version of the kappa measure (Cohen, 1960; Fleiss, 1971) that takes into account an arbitrary number of raters.", "The agreement alpha value on humorous versus non-humorous is 0.5710.", "According to Fleiss (1981), this means that the agreement is somewhere between \"moderate\" and \"substantial\", suggesting there is acceptable agreement even though humans cannot completely agree.", "We believe that the carefully designed user interface improved the quality of the annotation, as, unlike Castro et al. (2016), this work's annotation web page presented less ambiguity between the classes Not Humorous and Not Funny.", "We clearly outperformed their inter-annotator agreement (which was 0.3654).", "Additionally, if we consider the whole corpus (including the removed annotations), this figure decreases to 0.5512.", "This shows that the test tweets were helpful for filtering out low-quality annotations.", "Additionally, we can try to estimate to what extent the annotators agree on the funniness value of the tweets.", "In this case, disagreement between close values on the scale (e.g., Not Funny and Little Funny) should have less impact than disagreement between values that are further apart (e.g., Not Funny and Excellent).", "Following Stevens (1946), in the previous case we were dealing with a nominal measure, while in this case it is an ordinal measure.", "Alpha incorporates this into the formula through a generic distance function between ratings, so we applied it and obtained a value of 0.1625, which is far from good; it is closer to a random annotation.", "There is a lack of agreement on the funniness.", "In this case, a machine will not be able to assign a unique funniness value to a tweet, which is consistent with its subjectivity, although other techniques could be used (Geng, 2016).", "If we consider the whole dataset, this number decreases to 0.1442.", "If we only consider the eleven annotators who tagged more than a thousand times (who tagged 50,939 times in total), the humor and funniness agreement values are 0.6345 and 0.2635, respectively.",
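Since Krippendorff's alpha is the paper's central agreement statistic, a minimal self-contained sketch of the nominal-data computation (alpha = 1 - D_o / D_e over a coincidence matrix) is given below; this is our own illustration, not the authors' code, and the closing comment notes how the ordinal funniness variant differs:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` holds one list of ratings per tweet (annotators who did not
    rate a tweet are simply absent). Builds the coincidence matrix
    o[(c, k)] and returns 1 - D_o / D_e with the 0/1 nominal distance.
    """
    o = Counter()
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue  # a single rating yields no pairable values
        for i, j in permutations(range(m), 2):
            o[(ratings[i], ratings[j])] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    if n == 0:
        return float("nan")  # no unit has two or more ratings
    d_obs = sum(w for (c, k), w in o.items() if c != k) / n
    d_exp = (n * n - sum(v * v for v in n_c.values())) / (n * (n - 1))
    return 1.0 if d_exp == 0 else 1.0 - d_obs / d_exp

# e.g.: krippendorff_alpha_nominal([["h", "h", "n"], ["n", "n"], ["h", "n"]])
# The ordinal variant used for the funniness agreement keeps the same
# structure but replaces the 0/1 distance with a squared distance over
# cumulative value frequencies.
```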
"Conclusion and Future Work Our main contribution is a corpus of tweets in Spanish, labeled with a humor value and a funniness score through crowd-sourced annotation.", "The dataset contains 27,282 tweets coming from multiple sources, with 107,634 annotations.", "The corpus showed high quality, as reflected by the significant inter-annotator agreement value.", "The dataset serves to build a Spanish humor classifier, but it also serves as a first step towards tackling humor and funniness subjectivity.", "Even though more annotations per tweet would be desirable, there is a subset of a thousand tweets with at least six annotations that could be used to study people's opinions on the same instances.", "Future steps involve gathering more annotations per tweet for a considerable number of tweets, so that techniques such as the ones in Geng (2016) could be used to study how people perceive the humorous pieces and which subjects and phrases they consider funnier.", "It would be interesting to consider social strata (e.g., origin, age and gender) when trying to find these patterns.", "Additionally, a similar dataset could be built for other languages for which more data is available to cross over with (such as English), building a humor classifier exploiting re-" ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Extraction", "Annotation", "Corpus", "Annotation Distribution", "Annotators Distribution", "Annotators Agreement", "Conclusion and Future Work" ] }
GEM-SciDuet-train-44#paper-1064#slide-0
Background i
Humor Detection is about telling if a text is humorous. Example: "My grandpa came to America looking for freedom, but it didn't work out; on the next flight, my grandma was coming."
Humor Detection is about telling if a text is humorous. Example: "My grandpa came to America looking for freedom, but it didn't work out; on the next flight, my grandma was coming."
[]
GEM-SciDuet-train-44#paper-1064#slide-1
1064
Background ii
Some previous work, such as Barbieri and Saggion (2014), Mihalcea and Strapparava (2005), and Sjöbergh and Araki (2007), created binary Humor Classifiers for short texts written in English. They extracted one-liners from the Internet, such as: "Beauty is in the eye of the beer holder." Castro et al. (2016) worked on Spanish tweets, since our group is interested in leveraging tools for Spanish. Back then, we conceived the first and only Spanish dataset to study humor.
Some previous work, such as Barbieri and Saggion (2014), Mihalcea and Strapparava (2005), and Sjöbergh and Araki (2007), created binary Humor Classifiers for short texts written in English. They extracted one-liners from the Internet, such as: "Beauty is in the eye of the beer holder." Castro et al. (2016) worked on Spanish tweets, since our group is interested in leveraging tools for Spanish. Back then, we conceived the first and only Spanish dataset to study humor.
[]
GEM-SciDuet-train-44#paper-1064#slide-2
1064
Background iii
Castro et al.'s (2016) corpus provided 40k tweets from 18 accounts, with 34k annotations. The annotators decided if the tweets were humorous or not, and if so rated them from 1 to 5. However, the dataset has some issues: 1. low inter-annotator agreement (Fleiss' kappa of 0.3654); 2. limited variety of sources (humorous: 9 Twitter accounts; non-humorous: 3 news accounts, 3 accounts of inspirational thoughts and 3 of curious facts); 3. very few annotations per tweet (fewer than 2 on average, with only around 500 tweets having 5 annotations); 4. only 6k tweets were considered humorous by the crowd.
Castro et al.'s (2016) corpus provided 40k tweets from 18 accounts, with 34k annotations. The annotators decided if the tweets were humorous or not, and if so rated them from 1 to 5. However, the dataset has some issues: 1. low inter-annotator agreement (Fleiss' kappa of 0.3654); 2. limited variety of sources (humorous: 9 Twitter accounts; non-humorous: 3 news accounts, 3 accounts of inspirational thoughts and 3 of curious facts); 3. very few annotations per tweet (fewer than 2 on average, with only around 500 tweets having 5 annotations); 4. only 6k tweets were considered humorous by the crowd.
[]
GEM-SciDuet-train-44#paper-1064#slide-4
1064
Related work
Potash, Romanov, and Rumshisky (2017) built a corpus based on English tweets that aims to distinguish the degree of funniness in a given tweet. They used the tweet set issued in response to a TV game show, labeling which tweets were considered humorous by the show. It was used in SemEval-2017 Task 6 (#HashtagWars).
Potash, Romanov, and Rumshisky (2017) built a corpus based on English tweets that aims to distinguish the degree of funniness in a given tweet. They used the tweet set issued in response to a TV game show, labeling which tweets were considered humorous by the show. It was used in SemEval-2017 Task 6 (#HashtagWars).
[]
GEM-SciDuet-train-44#paper-1064#slide-5
GEM-SciDuet-train-44#paper-1064#slide-5
Extraction i
1. We wanted to have at least 20k tweets as balanced as possible, at least 5 annotations each. 2. We fetched tweets from 50 humorous accounts from Spanish speaking countries, taking 12k at random. 3. We fetched tweet samples written in Spanish throughout
1. We wanted to have at least 20k tweets as balanced as possible, at least 5 annotations each. 2. We fetched tweets from 50 humorous accounts from Spanish speaking countries, taking 12k at random. 3. We fetched tweet samples written in Spanish throughout
[]
GEM-SciDuet-train-44#paper-1064#slide-6
GEM-SciDuet-train-44#paper-1064#slide-6
Extraction ii
4. As expected, both sources contained a mix of humorous and non-humorous tweets.
4. As expected, both sources contained a mix of humorous and non-humorous tweets.
[]
GEM-SciDuet-train-44#paper-1064#slide-7
GEM-SciDuet-train-44#paper-1064#slide-7
Annotation i
We built a web page, similar to the one used by Castro et al.
We built a web page, similar to the one used by Castro et al.
[]
GEM-SciDuet-train-44#paper-1064#slide-9
1064
A Crowd-Annotated Spanish Corpus for Humor Analysis
Computational Humor involves several tasks, such as humor recognition, humor generation, and humor scoring, for which it is useful to have human-curated data. In this work we present a corpus of 27,000 tweets written in Spanish and crowd-annotated by their humor value and funniness score, with about four annotations per tweet, tagged by 1,300 people over the Internet. It is equally divided between tweets coming from humorous and non-humorous accounts. The interannotator agreement Krippendorff's alpha value is 0.5710. The dataset is available for general usage and can serve as a basis for humor detection and as a first step to tackle subjectivity.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction Computational Humor studies humor from a computational perspective, involving several tasks such as humor recognition, which aims to tell if a piece of text is humorous or not; humor generation, with the objective of generating new texts with funny content; and humor scoring, whose goal is to predict how funny a piece of text is.", "In order to carry out this kind of tasks through supervised machine learning methods, humancurated data is necessary.", "Castro et al.", "(2016) built a humor classifier for Spanish and provided a dataset for humor recognition.", "However, there are some issues: few annotations per instance, low annotator agreement, and limited variety of sources for the humorous and mostly for the nonhumorous tweets (the latter were only about news, inspirational thoughts and curious facts).", "Up to our knowledge, there is no other dataset to work on humor comprehension in Spanish.", "Some other authors, such as Mihalcea and Strapparava (2005a,b) ; Sjöbergh and Araki (2007) have tackled humor recognition in English texts, building their own corpora by downloading one-liners (onesentence jokes) from the Internet, since working with longer texts would involve additional work, such as determining humor scope.", "The microblogging platform Twitter has been found particularly useful for building humor corpora due to its public availability and the fact that its short messages are suitable for jokes or humorous comments.", "Castro et al.", "(2016) built their corpus based on Twitter, selecting nine humorous accounts and nine non-humorous accounts about news, thoughts and curious facts.", "Reyes et al.", "(2013) built a corpus for detecting irony in tweets by searching for several hashtags (i.e., #irony, #humor, #education and #politics), which is also used in Barbieri and Saggion (2014) to train a classifier that detects humor.", "More recently, Potash et al.", "(2017) built a corpus based on tweets that aims to distinguish the degree of funniness in a given tweet.", "They used the tweet set issued in response to a TV game show, labeling which tweets were considered humorous by the show.", "In this work we present a crowd-annotated Spanish corpus of tweets tagged with a humor/no humor value and also by a funniness score from one to five.", "The corpus contains tweets extracted from varied sources and has several annotations per tweet, reaching a high humor inter-annotator agreement.", "The contribution of this work is twofold: the dataset is not only useful for building a humor classifier but it also serves to approach subjectivity in humor and funniness.", "Even though there are not enough annotations per tweet as required to study subjectivity in a genuine way with techniques such as the ones by Geng (2016) , the dataset aids as a playground to study the funniness and disagree-ment among several people.", "This document is organized as follows.", "Section 2 explains where and how we obtained the data, and Section 3 describes how it was annotated.", "In Section 
4 we present the corpus, and we address the analysis in Section 5.", "Finally, in Section 6 we present draw the conclusions and present the future work.", "Extraction The aim of the extraction and annotation process was to build a corpus of at least 20,000 tweets that was as balanced as possible between the humor and not humor classes.", "Furthermore, as we intended to have a way of calculating the funniness score of a tweet, we needed to have several votes for the tweets that were considered humorous.", "As we wanted to have both humorous and non-humorous tweet samples, we extracted tweets from selected accounts and from realtime samples.", "For the former, based on Castro et al.", "(2016) , we selected tweets from fifty humorous accounts from Spanish speaking countries, and took a random sample of size 12,000.", "For the latter, we fetched tweet samples written in Spanish throughout February 2018 1 , and from this collection we took another random sample of size 12,000.", "Note that we preferred to take realtime tweet samples as we did not want to bias by selecting certain negative examples, such as news or inspirational thoughts as in Castro et al.", "(2016) and Mihalcea and Strapparava (2005b) .", "From both sources we ignored retweets, responses, citations and tweets containing links, as we wanted the text to be selfcontained.", "As expected, both sources contained a mix of humorous and non-humorous tweets.", "In the case of humorous accounts, this may be due to the fact that many tweets are used to increase the number of followers, expressing an opinion on a current event or supporting some popular cause.", "We first aimed to have five votes for each tweet, and to decide which tweet was humorous by simple majority.", "However, at a certain stage during the annotation process, we noticed that the users were voting too many tweets as non-humorous, and the result was highly unbalanced.", "Because of this, we made some adjustments in the corpus and the process: as the target was to have five votes for each tweet, we considered that the 1 The language detection feature is provided by the Twitter REST API.", "The annotator is asked whether the tweet intends to be humorous.", "The available options are \"Yes\", \"No\" or \"Skip\".", "If the annotator selects \"Yes\", five emoji are shown so the annotator can specify how funny he considers the tweet.", "The emoji also include labels describing the funniness levels.", "tweets that already had three non-humorous annotations at this stage should be considered as not humor, then we deprioritized them so the users could focus in annotating the rest of the tweets that were still ambiguous.", "We also injected 4,500 more tweets randomly extracted only from the humorous accounts.", "These new tweets were also prioritized since they had less annotations than the rest.", "Annotation A crowdsourced web annotation task was carried out to tag all tweets.", "2 The annotators were shown tweets as in Fig.", "1 .", "The tweets were randomly chosen but web session information was kept to avoid showing duplicates.", "We tried to keep the user interface as intuitive and self-explanatory as possible, trying not to induce any bias on users and letting them come up with their own definition of humor.", "The simple and friendly interface is meant to keep the users engaged and having fun while classifying tweets as humorous or not, and how funny they are, with as few instructions as possible.", "If a person decides that a tweet is humorous, he has to rate it between 
one to five using emoji.", "In this way, the annotator gives more information rather than just stating the tweet is humorous.", "We also allowed annotators to skip a tweet or click a help button for more information.", "We consider that explicitly asking the annotator if the text intends to be humorous makes the distinction between the Not Humorous and Not Funny classes less ambiguous, which we believe was a problem with the user interface of Castro et al. (2016).", "Also, we consider our emoji-based funniness rating to be clearer for annotators than their star-based rating.", "The web page was shared on popular social networks along with some context about the task, and the annotation period ran between March 8th and 27th, 2018.", "The first tweets shown to every session were the same: three tweets for which we know a clear answer (one of them was humorous and the other two were not).", "These first tweets (\"test tweets\") were meant as a way of introducing the user to how the interface works, and also as an initial way of evaluating the quality of the annotations.", "After the introductory tweets, the rest of the tweets were sampled randomly, starting with the ones with the least number of votes.", "Corpus The dataset consists of two CSV files: tweets and annotations.", "The former contains the identifier and origin (which can be the realtime samples or the selected accounts) for each one of the 27,282 tweets 3 , while the latter contains the tweet identifier, session identifier, date and annotation value for each one of the 117,800 annotations received during the annotation phase (including the 2,959 times the skip button was pressed).", "The dataset was released and it is available online. 4", "When compiling the final version of the corpus, we considered the annotations of users that did not answer the first three tweets correctly as having lower quality.", "These sessions should not be used for training or testing machine learning algorithms.", "Fortunately, only a small number of annotations had to be discarded for this reason.", "The final number of annotations is 107,634 (not including the times the skip button was pressed), including 3,916 annotations assigned to the test tweets themselves.", "Analysis Annotation Distribution Each tweet received 3.8 annotations on average, with a standard deviation of 1.16, not considering the test tweets as they are outliers (they have a large number of annotations).", "The annotation distribution is shown in Fig. 2.", "[Footnote 3: Tweet text is not included in the corpus due to Twitter Terms and Conditions. The texts can be obtained from the IDs.]", "The histogram is highly concentrated: more than 98% of the tweets received between two and six annotations each.", "Even though the strategy was to show random tweets among the ones with the fewest annotations, note that there are tweets with fewer than three annotations because some annotations were finally filtered out.", "At the same time, there are some tweets with more than six annotations because we merged annotations from a few dozen duplicate tweets.", "Also, note that there is a considerable number of tweets with at least six annotations (1,001).", "This subset can be useful to study the different annotator opinions on the same instances.", "Fig. 3 shows how the classes are distributed among the annotations.", "Roughly two thirds were assigned to the class Not Humorous, agreeing with the fact that there seem to be more non-humorous tweets from humorous accounts than the other way 
around.", "The graph also indicates that there is a bias towards bad jokes in humor, according to the annotators.", "We use simple majority of votes for categorizing between humorous or not humorous, and weighted average for computing the funniness score only for humorous tweets.", "The scale goes from one (Not Funny) to five (Excellent).", "Under this scheme, 27.01% of the tweets are humorous, 70.6% are not-humorous while 2.39% is undecided (2.38% tied and 0.01% no annotations).", "At the same time, humorous tweets have little funniness overall: the funniness score average is 1.35 and standard deviation 0.85.", "Class Distribution Annotators Distribution There were 1, 271 annotators who tagged the tweets roughly as follows: two annotators tagged 13, 000 tweets, then one annotated 8, 000, the next eight annotated between one and three thousand, the next 105 annotated between one hundred and one thousand and the rest annotated less than a hundred, having 32, 584 annotations in total (see Fig.", "4 ).", "The average was 83 tags by annotator, with a standard deviation of 597.", "Annotators Agreement An important aspect to analyze is to what extent the annotators agree on which tweets are humorous.", "We used the alpha measure from Krippendorff (2012) , a generalized version of the kappa measure (Cohen, 1960; Fleiss, 1971 ) that takes in account an arbitray number of raters.", "The agreement alpha value on humorous versus nonhumorous is 0.5710.", "According to Fleiss (1981) , it means that the agreement is somewhat between \"moderate\" to \"substantial\", suggesting there is acceptable agreement but the humans cannot completely agree.", "We believe that the carefully designed user interface impacted in the quality of the annotation, as unlike Castro et al.", "(2016) this work's annotation web page presented less ambiguity between the class Not Humorous and Not Funny.", "We clearly outperformed their interannotator agreement (which was 0.3654).", "Additionally, if we consider the whole corpus (including the removed annotations), this figure decreases to 0.5512.", "This shows that the test tweets were helpful to filter out low quality annotations.", "Additionally, we can try to estimate to what extent the annotators agree on the funniness value of the tweets.", "In this case, disagreement between close values in the scale (e.g.", "Not Funny and Little Funny) should have less impact than disagreement between values that are further (e.g.", "Not Funny and Excellent).", "Following Stevens (1946) , in the previous case we were dealing with a nominal measure while in this case it is an ordinal measure.", "Alpha considers this into the formula by using a generic distance function between ratings, so we applied it and obtained a value of 0.1625 which is far from good; it is closer to a random annotation.", "There is a lack of agreement on the funniness.", "In this case, a machine will not be able to assign a unique value of funniness to a tweet, which makes sense with its subjectivity, albeit other techniques could be used (Geng, 2016) .", "In this case, if we consider the whole dataset, this number decreases to 0.1442.", "If we only consider the eleven annotators who tagged more than a thousand times (who tagged 50, 939 times in total), the humor and funniness agreement are respectively 0.6345 and 0.2635.", "Conclusion and Future Work Our main contribution is a corpus of tweets in Spanish labeled by their humor value and funniness score with respect to a crowd-sourced annotation.", "The dataset 
contains 27,282 tweets coming from multiple sources, with 107,634 annotations.", "The corpus showed high quality because of the significant inter-annotator agreement value.", "The dataset serves to build a Spanish humor classifier, but it also serves as a first step to tackle humor and funniness subjectivity.", "Even though more annotations per tweet would be appropriate, there is a subset of a thousand tweets with at least six annotations that could be used to study people's opinions on the same instances.", "Future steps involve gathering more annotations per tweet for a considerable number of tweets, so techniques such as the ones in (Geng, 2016) could be used to study how people perceive the humorous pieces and what subjects and phrases they consider funnier.", "It would be interesting to consider social strata (e.g.", "origin, age and gender) when trying to find these patterns.", "Additionally, a similar dataset could be built for other languages for which more data is available to cross over with (such as English) and build a humor classifier exploiting re-" ] }
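The majority-vote and funniness-score aggregation described in the paper above can be made concrete with a short sketch. This is illustrative Python, not code released with the corpus: the value encoding ("N" for a not-humorous vote, integers 1-5 for a funniness rating) and the name aggregate_tweet are assumptions, and since the paper does not fully specify its "weighted average", a plain mean of the 1-5 ratings is used here.

    def aggregate_tweet(annotations):
        # annotations: one tweet's votes; "N" = not humorous (assumed encoding),
        # integers 1..5 = humorous with that funniness level.
        humor = [a for a in annotations if a != "N"]
        not_humor_votes = len(annotations) - len(humor)
        if len(humor) > not_humor_votes:
            label = "humorous"
        elif len(humor) < not_humor_votes:
            label = "not humorous"
        else:
            label = "undecided"  # ties account for 2.38% of the corpus
        # Funniness is computed only for humorous tweets, on the 1..5 scale.
        funniness = sum(humor) / len(humor) if label == "humorous" else None
        return label, funniness

    # Example: three humor votes (ratings 2, 3, 1) against two "N" votes.
    print(aggregate_tweet([2, "N", 3, 1, "N"]))  # -> ("humorous", 2.0)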
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Extraction", "Annotation", "Corpus", "Annotation Distribution", "Annotators Distribution", "Annotators Agreement", "Conclusion and Future Work" ] }
GEM-SciDuet-train-44#paper-1064#slide-9
Annotation iii
Tweets were randomly shown to annotators, while avoiding duplicates (by using web cookies). We wanted the UI to be as intuitive and self-explanatory as possible, trying not to induce any bias on users and letting them come up with their own definition of humor. The simple and friendly interface is meant to keep the users engaged and having fun while classifying tweets.
Tweets were randomly shown to annotators, while avoiding duplicates (by using web cookies). We wanted the UI to be as intuitive and self-explanatory as possible, trying not to induce any bias on users and letting them come up with their own definition of humor. The simple and friendly interface is meant to keep the users engaged and having fun while classifying tweets.
[]
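The random-but-deduplicated serving this slide describes fits in a few lines. An illustrative Python sketch, assuming each web session (identified by a cookie) tracks the set of tweet IDs it has already been shown:

    import random

    def next_tweet_for_session(all_tweet_ids, seen_ids):
        # seen_ids would be stored per session, keyed by a web cookie.
        unseen = [t for t in all_tweet_ids if t not in seen_ids]
        return random.choice(unseen) if unseen else None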
GEM-SciDuet-train-44#paper-1064#slide-10
GEM-SciDuet-train-44#paper-1064#slide-10
Annotation iv
People annotated from March 8th to 27th, 2018. The first tweets shown to every session were the same: 3 tweets for which we know a clear answer. During the annotation process, we added around 4,500 tweets coming from humorous accounts to help balance the classes.
People annotated from March 8th to 27th, 2018. The first tweets shown to every session were the same: 3 tweets for which we know a clear answer. During the annotation process, we added around 4,500 tweets coming from humorous accounts to help balance the classes.
[]
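The sampling policy the paper describes (serve the tweets with the fewest votes first, with already-decided tweets deprioritized) can be sketched as follows; the data structures are assumptions for illustration:

    import random

    def pick_tweet(vote_counts, resolved):
        # vote_counts: tweet_id -> number of annotations so far;
        # resolved: tweets already decided (e.g., three not-humorous votes).
        pool = [t for t in vote_counts if t not in resolved]
        if not pool:
            pool = list(resolved)  # deprioritized, not removed entirely
        fewest = min(vote_counts.get(t, 0) for t in pool)
        return random.choice([t for t in pool if vote_counts.get(t, 0) == fewest])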
GEM-SciDuet-train-44#paper-1064#slide-11
GEM-SciDuet-train-44#paper-1064#slide-11
Dataset i
The dataset consists of two CSV files: tweets and annotations. Annotations file columns: tweet ID, session ID, date, value.
The dataset consists of two CSV files: tweets and annotations. Annotations file columns: tweet ID, session ID, date, value.
[]
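Loading and joining the two CSV files from this slide is straightforward; a sketch with pandas, where the file and column names are assumptions based on the paper's description of the fields, not the released files' actual headers:

    import pandas as pd

    tweets = pd.read_csv("tweets.csv")            # assumed columns: tweet_id, origin
    annotations = pd.read_csv("annotations.csv")  # assumed: tweet_id, session_id, date, value

    merged = annotations.merge(tweets, on="tweet_id", how="left")
    # Per-tweet annotation counts; the paper reports a mean of ~3.8.
    print(merged.groupby("tweet_id").size().mean())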
GEM-SciDuet-train-44#paper-1064#slide-12
1064
A Crowd-Annotated Spanish Corpus for Humor Analysis
Computational Humor involves several tasks, such as humor recognition, humor generation, and humor scoring, for which it is useful to have human-curated data. In this work we present a corpus of 27,000 tweets written in Spanish and crowd-annotated by their humor value and funniness score, with about four annotations per tweet, tagged by 1,300 people over the Internet. It is equally divided between tweets coming from humorous and non-humorous accounts. The interannotator agreement Krippendorff's alpha value is 0.5710. The dataset is available for general usage and can serve as a basis for humor detection and as a first step to tackle subjectivity.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction Computational Humor studies humor from a computational perspective, involving several tasks such as humor recognition, which aims to tell if a piece of text is humorous or not; humor generation, with the objective of generating new texts with funny content; and humor scoring, whose goal is to predict how funny a piece of text is.", "In order to carry out this kind of tasks through supervised machine learning methods, humancurated data is necessary.", "Castro et al.", "(2016) built a humor classifier for Spanish and provided a dataset for humor recognition.", "However, there are some issues: few annotations per instance, low annotator agreement, and limited variety of sources for the humorous and mostly for the nonhumorous tweets (the latter were only about news, inspirational thoughts and curious facts).", "Up to our knowledge, there is no other dataset to work on humor comprehension in Spanish.", "Some other authors, such as Mihalcea and Strapparava (2005a,b) ; Sjöbergh and Araki (2007) have tackled humor recognition in English texts, building their own corpora by downloading one-liners (onesentence jokes) from the Internet, since working with longer texts would involve additional work, such as determining humor scope.", "The microblogging platform Twitter has been found particularly useful for building humor corpora due to its public availability and the fact that its short messages are suitable for jokes or humorous comments.", "Castro et al.", "(2016) built their corpus based on Twitter, selecting nine humorous accounts and nine non-humorous accounts about news, thoughts and curious facts.", "Reyes et al.", "(2013) built a corpus for detecting irony in tweets by searching for several hashtags (i.e., #irony, #humor, #education and #politics), which is also used in Barbieri and Saggion (2014) to train a classifier that detects humor.", "More recently, Potash et al.", "(2017) built a corpus based on tweets that aims to distinguish the degree of funniness in a given tweet.", "They used the tweet set issued in response to a TV game show, labeling which tweets were considered humorous by the show.", "In this work we present a crowd-annotated Spanish corpus of tweets tagged with a humor/no humor value and also by a funniness score from one to five.", "The corpus contains tweets extracted from varied sources and has several annotations per tweet, reaching a high humor inter-annotator agreement.", "The contribution of this work is twofold: the dataset is not only useful for building a humor classifier but it also serves to approach subjectivity in humor and funniness.", "Even though there are not enough annotations per tweet as required to study subjectivity in a genuine way with techniques such as the ones by Geng (2016) , the dataset aids as a playground to study the funniness and disagree-ment among several people.", "This document is organized as follows.", "Section 2 explains where and how we obtained the data, and Section 3 describes how it was annotated.", "In Section 
4 we present the corpus, and we address the analysis in Section 5.", "Finally, in Section 6 we present draw the conclusions and present the future work.", "Extraction The aim of the extraction and annotation process was to build a corpus of at least 20,000 tweets that was as balanced as possible between the humor and not humor classes.", "Furthermore, as we intended to have a way of calculating the funniness score of a tweet, we needed to have several votes for the tweets that were considered humorous.", "As we wanted to have both humorous and non-humorous tweet samples, we extracted tweets from selected accounts and from realtime samples.", "For the former, based on Castro et al.", "(2016) , we selected tweets from fifty humorous accounts from Spanish speaking countries, and took a random sample of size 12,000.", "For the latter, we fetched tweet samples written in Spanish throughout February 2018 1 , and from this collection we took another random sample of size 12,000.", "Note that we preferred to take realtime tweet samples as we did not want to bias by selecting certain negative examples, such as news or inspirational thoughts as in Castro et al.", "(2016) and Mihalcea and Strapparava (2005b) .", "From both sources we ignored retweets, responses, citations and tweets containing links, as we wanted the text to be selfcontained.", "As expected, both sources contained a mix of humorous and non-humorous tweets.", "In the case of humorous accounts, this may be due to the fact that many tweets are used to increase the number of followers, expressing an opinion on a current event or supporting some popular cause.", "We first aimed to have five votes for each tweet, and to decide which tweet was humorous by simple majority.", "However, at a certain stage during the annotation process, we noticed that the users were voting too many tweets as non-humorous, and the result was highly unbalanced.", "Because of this, we made some adjustments in the corpus and the process: as the target was to have five votes for each tweet, we considered that the 1 The language detection feature is provided by the Twitter REST API.", "The annotator is asked whether the tweet intends to be humorous.", "The available options are \"Yes\", \"No\" or \"Skip\".", "If the annotator selects \"Yes\", five emoji are shown so the annotator can specify how funny he considers the tweet.", "The emoji also include labels describing the funniness levels.", "tweets that already had three non-humorous annotations at this stage should be considered as not humor, then we deprioritized them so the users could focus in annotating the rest of the tweets that were still ambiguous.", "We also injected 4,500 more tweets randomly extracted only from the humorous accounts.", "These new tweets were also prioritized since they had less annotations than the rest.", "Annotation A crowdsourced web annotation task was carried out to tag all tweets.", "2 The annotators were shown tweets as in Fig.", "1 .", "The tweets were randomly chosen but web session information was kept to avoid showing duplicates.", "We tried to keep the user interface as intuitive and self-explanatory as possible, trying not to induce any bias on users and letting them come up with their own definition of humor.", "The simple and friendly interface is meant to keep the users engaged and having fun while classifying tweets as humorous or not, and how funny they are, with as few instructions as possible.", "If a person decides that a tweet is humorous, he has to rate it between 
one to five by using emoji.", "In this way, the annotator gives more information rather than just stating the tweet is humorous.", "We also allowed to skip a tweet or click a help button for more information.", "We consider that explicitly asking the annotator if the text intends to be humorous makes the distinction between the Not Humorous and Not Funny classes less ambiguous, which we believe was a problem of (Castro et al., 2016) user interface.", "Also, we consider our emoji rated funniness score to be clearer for annotators than their stars based rating.", "The web page was shared on popular social networks along with some context about the task and the annotation period occurred between March 8 th and 27 th , 2018.", "The first tweets shown to every session were the same: three tweets for which we know a clear answer (one of them was humorous and the other two were not).", "These first tweets (\"test tweets\") were meant as a way of introducing the user into how the interface works, and also as an initial way for evaluating the quality of the annotations.", "After the introductory tweets, the rest of the tweets were sampled randomly, starting with the ones with the least number of votes.", "Corpus The dataset consists of two CSV files: tweets and annotations.", "The former contains the identifier and origin (which can be the realtime samples or the selected accounts) for each one of the 27, 282 tweets 3 , while the latter contains the tweet identifier, session identifier, date and annotation value for each one of the 117, 800 annotations received during the annotation phase (including the times the skip button was pressed, 2, 959 times).", "The dataset was released and it is available online.", "4 When compiling the final version of the corpus, we considered the annotations of users that did not answer the first three tweets correctly as having lower quality.", "These sessions should not be used for training or testing machine learning algorithms.", "Fortunately, only a small number of annotations had to be discarded because of this reason.", "The final number of annotations is 107, 634 (not including the times the skip button was pressed), including 3, 916 annotations assigned to the test tweets themselves.", "Analysis Annotation Distribution Each tweet received 3.8 annotations on average, with a standard deviation of 1.16, not considering the test tweets as they are outliers (they have a large number of annotations).", "The annotation 3 Tweet text is not included in the corpus due to Twitter Terms and Conditions.", "They can be obtained from the IDs.", "distribution is shown in Fig.", "2 .", "The histogram is highly concentrated: more than 98% of the tweets received between two and six annotations each.", "Even though the strategy was to show random tweets among the ones with less annotations, note that there are tweets with less than three annotations because some annotations were finally filtered out.", "At the same time, there are some tweets with more than six annotations because we merged annotations from a few dozen duplicate tweets.", "Also, note that there is a considerable amount of tweets with at least six annotations (1, 001).", "This subset can be useful to study the different annotator opinions under the same instances.", "Fig.", "3 shows how the classes are distributed between the annotations.", "Roughly two thirds were assigned to the class Not Humorous, agreeing with the fact that there seem to be more non-humorous tweets from humorous accounts than the other way 
around.", "The graph also indicates that there is a bias towards bad jokes in humor, according to the annotators.", "We use simple majority of votes for categorizing between humorous or not humorous, and weighted average for computing the funniness score only for humorous tweets.", "The scale goes from one (Not Funny) to five (Excellent).", "Under this scheme, 27.01% of the tweets are humorous, 70.6% are not-humorous while 2.39% is undecided (2.38% tied and 0.01% no annotations).", "At the same time, humorous tweets have little funniness overall: the funniness score average is 1.35 and standard deviation 0.85.", "Class Distribution Annotators Distribution There were 1, 271 annotators who tagged the tweets roughly as follows: two annotators tagged 13, 000 tweets, then one annotated 8, 000, the next eight annotated between one and three thousand, the next 105 annotated between one hundred and one thousand and the rest annotated less than a hundred, having 32, 584 annotations in total (see Fig.", "4 ).", "The average was 83 tags by annotator, with a standard deviation of 597.", "Annotators Agreement An important aspect to analyze is to what extent the annotators agree on which tweets are humorous.", "We used the alpha measure from Krippendorff (2012) , a generalized version of the kappa measure (Cohen, 1960; Fleiss, 1971 ) that takes in account an arbitray number of raters.", "The agreement alpha value on humorous versus nonhumorous is 0.5710.", "According to Fleiss (1981) , it means that the agreement is somewhat between \"moderate\" to \"substantial\", suggesting there is acceptable agreement but the humans cannot completely agree.", "We believe that the carefully designed user interface impacted in the quality of the annotation, as unlike Castro et al.", "(2016) this work's annotation web page presented less ambiguity between the class Not Humorous and Not Funny.", "We clearly outperformed their interannotator agreement (which was 0.3654).", "Additionally, if we consider the whole corpus (including the removed annotations), this figure decreases to 0.5512.", "This shows that the test tweets were helpful to filter out low quality annotations.", "Additionally, we can try to estimate to what extent the annotators agree on the funniness value of the tweets.", "In this case, disagreement between close values in the scale (e.g.", "Not Funny and Little Funny) should have less impact than disagreement between values that are further (e.g.", "Not Funny and Excellent).", "Following Stevens (1946) , in the previous case we were dealing with a nominal measure while in this case it is an ordinal measure.", "Alpha considers this into the formula by using a generic distance function between ratings, so we applied it and obtained a value of 0.1625 which is far from good; it is closer to a random annotation.", "There is a lack of agreement on the funniness.", "In this case, a machine will not be able to assign a unique value of funniness to a tweet, which makes sense with its subjectivity, albeit other techniques could be used (Geng, 2016) .", "In this case, if we consider the whole dataset, this number decreases to 0.1442.", "If we only consider the eleven annotators who tagged more than a thousand times (who tagged 50, 939 times in total), the humor and funniness agreement are respectively 0.6345 and 0.2635.", "Conclusion and Future Work Our main contribution is a corpus of tweets in Spanish labeled by their humor value and funniness score with respect to a crowd-sourced annotation.", "The dataset 
contains 27,282 tweets coming from multiple sources, with 107,634 annotations.", "The corpus showed high quality, as reflected by the significant inter-annotator agreement value.", "The dataset can be used to build a Spanish humor classifier, but it also serves as a first step toward tackling humor and funniness subjectivity.", "Even though more annotations per tweet would be appropriate, there is a subset of a thousand tweets with at least six annotations that could be used to study people's opinions on the same instances.", "Future steps involve gathering more annotations per tweet for a considerable number of tweets, so that techniques such as the ones in Geng (2016) could be used to study how people perceive the humorous pieces and what subjects and phrases they consider funnier.", "It would be interesting to consider social strata (e.g. origin, age and gender) when trying to find these patterns.", "Additionally, a similar dataset could be built for other languages that have more data to cross over with (such as English) and build a humor classifier exploiting re-" ] }
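The labeling scheme described above (simple majority for the humor class, and an average of the one-to-five emoji ratings for the funniness of humorous tweets) is easy to reproduce from the released annotations file. The following is a minimal sketch, not the authors' code; the encoding of an annotation as either the string "no" or an integer rating 1-5 is an assumption for illustration.

    from collections import Counter

    def label_and_funniness(annotations):
        # annotations: all values for one tweet; "no" marks a non-humorous
        # vote, integers 1-5 are emoji ratings of humorous votes (assumed).
        votes = Counter("humor" if isinstance(a, int) else "no" for a in annotations)
        if votes["humor"] > votes["no"]:
            ratings = [a for a in annotations if isinstance(a, int)]
            return "humorous", sum(ratings) / len(ratings)  # funniness score
        if votes["no"] > votes["humor"]:
            return "not humorous", None
        return "undecided", None  # tied, or no annotations at all

    # e.g. label_and_funniness(["no", 2, 3]) -> ("humorous", 2.5)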
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Extraction", "Annotation", "Corpus", "Annotation Distribution", "Annotators Distribution", "Annotators Agreement", "Conclusion and Future Work" ] }
GEM-SciDuet-train-44#paper-1064#slide-12
Dataset II
107,634 high quality annotations (excluding skips)
107,634 high quality annotations (excluding skips)
[]
GEM-SciDuet-train-44#paper-1064#slide-16
1064
A Crowd-Annotated Spanish Corpus for Humor Analysis
Computational Humor involves several tasks, such as humor recognition, humor generation, and humor scoring, for which it is useful to have human-curated data. In this work we present a corpus of 27,000 tweets written in Spanish and crowd-annotated with their humor value and funniness score, with about four annotations per tweet, tagged by 1,300 people over the Internet. It is equally divided between tweets coming from humorous and non-humorous accounts. The inter-annotator agreement (Krippendorff's alpha) value is 0.5710. The dataset is available for general usage and can serve as a basis for humor detection and as a first step toward tackling subjectivity.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction Computational Humor studies humor from a computational perspective, involving several tasks such as humor recognition, which aims to tell if a piece of text is humorous or not; humor generation, with the objective of generating new texts with funny content; and humor scoring, whose goal is to predict how funny a piece of text is.", "In order to carry out this kind of tasks through supervised machine learning methods, humancurated data is necessary.", "Castro et al.", "(2016) built a humor classifier for Spanish and provided a dataset for humor recognition.", "However, there are some issues: few annotations per instance, low annotator agreement, and limited variety of sources for the humorous and mostly for the nonhumorous tweets (the latter were only about news, inspirational thoughts and curious facts).", "Up to our knowledge, there is no other dataset to work on humor comprehension in Spanish.", "Some other authors, such as Mihalcea and Strapparava (2005a,b) ; Sjöbergh and Araki (2007) have tackled humor recognition in English texts, building their own corpora by downloading one-liners (onesentence jokes) from the Internet, since working with longer texts would involve additional work, such as determining humor scope.", "The microblogging platform Twitter has been found particularly useful for building humor corpora due to its public availability and the fact that its short messages are suitable for jokes or humorous comments.", "Castro et al.", "(2016) built their corpus based on Twitter, selecting nine humorous accounts and nine non-humorous accounts about news, thoughts and curious facts.", "Reyes et al.", "(2013) built a corpus for detecting irony in tweets by searching for several hashtags (i.e., #irony, #humor, #education and #politics), which is also used in Barbieri and Saggion (2014) to train a classifier that detects humor.", "More recently, Potash et al.", "(2017) built a corpus based on tweets that aims to distinguish the degree of funniness in a given tweet.", "They used the tweet set issued in response to a TV game show, labeling which tweets were considered humorous by the show.", "In this work we present a crowd-annotated Spanish corpus of tweets tagged with a humor/no humor value and also by a funniness score from one to five.", "The corpus contains tweets extracted from varied sources and has several annotations per tweet, reaching a high humor inter-annotator agreement.", "The contribution of this work is twofold: the dataset is not only useful for building a humor classifier but it also serves to approach subjectivity in humor and funniness.", "Even though there are not enough annotations per tweet as required to study subjectivity in a genuine way with techniques such as the ones by Geng (2016) , the dataset aids as a playground to study the funniness and disagree-ment among several people.", "This document is organized as follows.", "Section 2 explains where and how we obtained the data, and Section 3 describes how it was annotated.", "In Section 
4 we present the corpus, and we address the analysis in Section 5.", "Finally, in Section 6 we draw the conclusions and present future work.", "Extraction The aim of the extraction and annotation process was to build a corpus of at least 20,000 tweets that was as balanced as possible between the humor and not humor classes.", "Furthermore, as we intended to have a way of calculating the funniness score of a tweet, we needed to have several votes for the tweets that were considered humorous.", "As we wanted to have both humorous and non-humorous tweet samples, we extracted tweets from selected accounts and from realtime samples.", "For the former, based on Castro et al. (2016), we selected tweets from fifty humorous accounts from Spanish-speaking countries, and took a random sample of size 12,000.", "For the latter, we fetched tweet samples written in Spanish throughout February 2018 1 , and from this collection we took another random sample of size 12,000.", "Note that we preferred to take realtime tweet samples as we did not want to introduce bias by selecting certain negative examples, such as news or inspirational thoughts as in Castro et al. (2016) and Mihalcea and Strapparava (2005b).", "From both sources we ignored retweets, responses, citations and tweets containing links, as we wanted the text to be self-contained.", "As expected, both sources contained a mix of humorous and non-humorous tweets.", "In the case of humorous accounts, this may be due to the fact that many tweets are used to increase the number of followers, expressing an opinion on a current event or supporting some popular cause.", "We first aimed to have five votes for each tweet, and to decide which tweets were humorous by simple majority.", "However, at a certain stage during the annotation process, we noticed that the users were voting too many tweets as non-humorous, and the result was highly unbalanced.", "Because of this, we made some adjustments to the corpus and the process: as the target was to have five votes for each tweet, we considered that the tweets that already had three non-humorous annotations at this stage should be considered as not humor, and we then deprioritized them so the users could focus on annotating the rest of the tweets that were still ambiguous.", "(Footnote 1: The language detection feature is provided by the Twitter REST API.)", "(Figure 1 caption: The annotator is asked whether the tweet intends to be humorous. The available options are \"Yes\", \"No\" or \"Skip\". If the annotator selects \"Yes\", five emoji are shown so the annotator can specify how funny he considers the tweet. The emoji also include labels describing the funniness levels.)", "We also injected 4,500 more tweets randomly extracted only from the humorous accounts.", "These new tweets were also prioritized since they had fewer annotations than the rest.", "Annotation A crowdsourced web annotation task was carried out to tag all tweets 2 .", "The annotators were shown tweets as in Fig. 1.", "The tweets were randomly chosen, but web session information was kept to avoid showing duplicates.", "We tried to keep the user interface as intuitive and self-explanatory as possible, trying not to induce any bias in users and letting them come up with their own definition of humor.", "The simple and friendly interface is meant to keep the users engaged and having fun while classifying tweets as humorous or not, and rating how funny they are, with as few instructions as possible.", "If a person decides that a tweet is humorous, he has to rate it between
one to five by using emoji.", "In this way, the annotator gives more information rather than just stating that the tweet is humorous.", "We also allowed annotators to skip a tweet or to click a help button for more information.", "We consider that explicitly asking the annotator whether the text intends to be humorous makes the distinction between the Not Humorous and Not Funny classes less ambiguous, which we believe was a problem of the user interface of Castro et al. (2016).", "Also, we consider our emoji-rated funniness score to be clearer for annotators than their star-based rating.", "The web page was shared on popular social networks along with some context about the task, and the annotation period ran between March 8th and 27th, 2018.", "The first tweets shown in every session were the same: three tweets for which we know a clear answer (one of them was humorous and the other two were not).", "These first tweets (\"test tweets\") were meant as a way of introducing the user to how the interface works, and also as an initial way of evaluating the quality of the annotations.", "After the introductory tweets, the rest of the tweets were sampled randomly, starting with the ones with the fewest votes.", "Corpus The dataset consists of two CSV files: tweets and annotations.", "The former contains the identifier and origin (which can be the realtime samples or the selected accounts) for each one of the 27,282 tweets 3 , while the latter contains the tweet identifier, session identifier, date and annotation value for each one of the 117,800 annotations received during the annotation phase (including the 2,959 times the skip button was pressed).", "The dataset has been released and is available online 4 .", "When compiling the final version of the corpus, we considered the annotations of users that did not answer the first three tweets correctly as having lower quality.", "These sessions should not be used for training or testing machine learning algorithms.", "Fortunately, only a small number of annotations had to be discarded for this reason.", "The final number of annotations is 107,634 (not including the times the skip button was pressed), including 3,916 annotations assigned to the test tweets themselves.", "Analysis Annotation Distribution Each tweet received 3.8 annotations on average, with a standard deviation of 1.16, not considering the test tweets as they are outliers (they have a large number of annotations).", "The annotation distribution is shown in Fig. 2.", "(Footnote 3: Tweet text is not included in the corpus due to Twitter Terms and Conditions; the tweets can be obtained from their IDs.)", "The histogram is highly concentrated: more than 98% of the tweets received between two and six annotations each.", "Even though the strategy was to show random tweets among the ones with the fewest annotations, note that there are tweets with fewer than three annotations because some annotations were ultimately filtered out.", "At the same time, there are some tweets with more than six annotations because we merged annotations from a few dozen duplicate tweets.", "Also, note that there is a considerable number of tweets with at least six annotations (1,001).", "This subset can be useful for studying different annotator opinions on the same instances.", "Class Distribution Fig. 3 shows how the classes are distributed among the annotations.", "Roughly two thirds were assigned to the class Not Humorous, agreeing with the fact that there seem to be more non-humorous tweets from humorous accounts than the other way
around.", "The graph also indicates that there is a bias towards bad jokes in humor, according to the annotators.", "We use simple majority of votes for categorizing between humorous or not humorous, and weighted average for computing the funniness score only for humorous tweets.", "The scale goes from one (Not Funny) to five (Excellent).", "Under this scheme, 27.01% of the tweets are humorous, 70.6% are not-humorous while 2.39% is undecided (2.38% tied and 0.01% no annotations).", "At the same time, humorous tweets have little funniness overall: the funniness score average is 1.35 and standard deviation 0.85.", "Class Distribution Annotators Distribution There were 1, 271 annotators who tagged the tweets roughly as follows: two annotators tagged 13, 000 tweets, then one annotated 8, 000, the next eight annotated between one and three thousand, the next 105 annotated between one hundred and one thousand and the rest annotated less than a hundred, having 32, 584 annotations in total (see Fig.", "4 ).", "The average was 83 tags by annotator, with a standard deviation of 597.", "Annotators Agreement An important aspect to analyze is to what extent the annotators agree on which tweets are humorous.", "We used the alpha measure from Krippendorff (2012) , a generalized version of the kappa measure (Cohen, 1960; Fleiss, 1971 ) that takes in account an arbitray number of raters.", "The agreement alpha value on humorous versus nonhumorous is 0.5710.", "According to Fleiss (1981) , it means that the agreement is somewhat between \"moderate\" to \"substantial\", suggesting there is acceptable agreement but the humans cannot completely agree.", "We believe that the carefully designed user interface impacted in the quality of the annotation, as unlike Castro et al.", "(2016) this work's annotation web page presented less ambiguity between the class Not Humorous and Not Funny.", "We clearly outperformed their interannotator agreement (which was 0.3654).", "Additionally, if we consider the whole corpus (including the removed annotations), this figure decreases to 0.5512.", "This shows that the test tweets were helpful to filter out low quality annotations.", "Additionally, we can try to estimate to what extent the annotators agree on the funniness value of the tweets.", "In this case, disagreement between close values in the scale (e.g.", "Not Funny and Little Funny) should have less impact than disagreement between values that are further (e.g.", "Not Funny and Excellent).", "Following Stevens (1946) , in the previous case we were dealing with a nominal measure while in this case it is an ordinal measure.", "Alpha considers this into the formula by using a generic distance function between ratings, so we applied it and obtained a value of 0.1625 which is far from good; it is closer to a random annotation.", "There is a lack of agreement on the funniness.", "In this case, a machine will not be able to assign a unique value of funniness to a tweet, which makes sense with its subjectivity, albeit other techniques could be used (Geng, 2016) .", "In this case, if we consider the whole dataset, this number decreases to 0.1442.", "If we only consider the eleven annotators who tagged more than a thousand times (who tagged 50, 939 times in total), the humor and funniness agreement are respectively 0.6345 and 0.2635.", "Conclusion and Future Work Our main contribution is a corpus of tweets in Spanish labeled by their humor value and funniness score with respect to a crowd-sourced annotation.", "The dataset 
contains 27,282 tweets coming from multiple sources, with 107,634 annotations.", "The corpus showed high quality, as reflected by the significant inter-annotator agreement value.", "The dataset can be used to build a Spanish humor classifier, but it also serves as a first step toward tackling humor and funniness subjectivity.", "Even though more annotations per tweet would be appropriate, there is a subset of a thousand tweets with at least six annotations that could be used to study people's opinions on the same instances.", "Future steps involve gathering more annotations per tweet for a considerable number of tweets, so that techniques such as the ones in Geng (2016) could be used to study how people perceive the humorous pieces and what subjects and phrases they consider funnier.", "It would be interesting to consider social strata (e.g. origin, age and gender) when trying to find these patterns.", "Additionally, a similar dataset could be built for other languages that have more data to cross over with (such as English) and build a humor classifier exploiting re-" ] }
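The nominal Krippendorff's alpha reported above (0.5710 for humorous versus non-humorous) is computed from a coincidence matrix over the pairable annotations of each tweet. The sketch below implements the standard nominal formula and is not the authors' code; it assumes each tweet's annotations are given as a plain list of labels.

    from collections import Counter
    from itertools import permutations

    def krippendorff_alpha_nominal(units):
        # units: one list of labels per tweet; tweets with fewer than
        # two annotations are not pairable and are skipped.
        o = Counter()  # coincidence matrix, o[(a, b)]
        for ratings in units:
            m = len(ratings)
            if m < 2:
                continue
            for a, b in permutations(ratings, 2):
                o[(a, b)] += 1.0 / (m - 1)
        n = sum(o.values())              # total pairable values
        n_c = Counter()                  # marginal totals per label
        for (a, _), c in o.items():
            n_c[a] += c
        d_obs = sum(c for (a, b), c in o.items() if a != b) / n
        d_exp = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n * (n - 1))
        return 1.0 - d_obs / d_exp       # d_exp is 0 only if every label agrees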
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Extraction", "Annotation", "Corpus", "Annotation Distribution", "Annotators Distribution", "Annotators Agreement", "Conclusion and Future Work" ] }
GEM-SciDuet-train-44#paper-1064#slide-16
Agreement
If we include the low quality annotations, the agreement decreases. If we only consider the 11 annotators who tagged more than a thousand times, humor and funniness agreement are respectively 0.6345 and 0.2635.
If we include the low quality annotations, the agreement decreases. If we only consider the 11 annotators who tagged more than a thousand times, humor and funniness agreement are respectively 0.6345 and 0.2635.
[]
GEM-SciDuet-train-44#paper-1064#slide-17
1064
A Crowd-Annotated Spanish Corpus for Humor Analysis
Computational Humor involves several tasks, such as humor recognition, humor generation, and humor scoring, for which it is useful to have human-curated data. In this work we present a corpus of 27,000 tweets written in Spanish and crowd-annotated with their humor value and funniness score, with about four annotations per tweet, tagged by 1,300 people over the Internet. It is equally divided between tweets coming from humorous and non-humorous accounts. The inter-annotator agreement (Krippendorff's alpha) value is 0.5710. The dataset is available for general usage and can serve as a basis for humor detection and as a first step toward tackling subjectivity.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction Computational Humor studies humor from a computational perspective, involving several tasks such as humor recognition, which aims to tell if a piece of text is humorous or not; humor generation, with the objective of generating new texts with funny content; and humor scoring, whose goal is to predict how funny a piece of text is.", "In order to carry out this kind of tasks through supervised machine learning methods, humancurated data is necessary.", "Castro et al.", "(2016) built a humor classifier for Spanish and provided a dataset for humor recognition.", "However, there are some issues: few annotations per instance, low annotator agreement, and limited variety of sources for the humorous and mostly for the nonhumorous tweets (the latter were only about news, inspirational thoughts and curious facts).", "Up to our knowledge, there is no other dataset to work on humor comprehension in Spanish.", "Some other authors, such as Mihalcea and Strapparava (2005a,b) ; Sjöbergh and Araki (2007) have tackled humor recognition in English texts, building their own corpora by downloading one-liners (onesentence jokes) from the Internet, since working with longer texts would involve additional work, such as determining humor scope.", "The microblogging platform Twitter has been found particularly useful for building humor corpora due to its public availability and the fact that its short messages are suitable for jokes or humorous comments.", "Castro et al.", "(2016) built their corpus based on Twitter, selecting nine humorous accounts and nine non-humorous accounts about news, thoughts and curious facts.", "Reyes et al.", "(2013) built a corpus for detecting irony in tweets by searching for several hashtags (i.e., #irony, #humor, #education and #politics), which is also used in Barbieri and Saggion (2014) to train a classifier that detects humor.", "More recently, Potash et al.", "(2017) built a corpus based on tweets that aims to distinguish the degree of funniness in a given tweet.", "They used the tweet set issued in response to a TV game show, labeling which tweets were considered humorous by the show.", "In this work we present a crowd-annotated Spanish corpus of tweets tagged with a humor/no humor value and also by a funniness score from one to five.", "The corpus contains tweets extracted from varied sources and has several annotations per tweet, reaching a high humor inter-annotator agreement.", "The contribution of this work is twofold: the dataset is not only useful for building a humor classifier but it also serves to approach subjectivity in humor and funniness.", "Even though there are not enough annotations per tweet as required to study subjectivity in a genuine way with techniques such as the ones by Geng (2016) , the dataset aids as a playground to study the funniness and disagree-ment among several people.", "This document is organized as follows.", "Section 2 explains where and how we obtained the data, and Section 3 describes how it was annotated.", "In Section 
4 we present the corpus, and we address the analysis in Section 5.", "Finally, in Section 6 we draw the conclusions and present future work.", "Extraction The aim of the extraction and annotation process was to build a corpus of at least 20,000 tweets that was as balanced as possible between the humor and not humor classes.", "Furthermore, as we intended to have a way of calculating the funniness score of a tweet, we needed to have several votes for the tweets that were considered humorous.", "As we wanted to have both humorous and non-humorous tweet samples, we extracted tweets from selected accounts and from realtime samples.", "For the former, based on Castro et al. (2016), we selected tweets from fifty humorous accounts from Spanish-speaking countries, and took a random sample of size 12,000.", "For the latter, we fetched tweet samples written in Spanish throughout February 2018 1 , and from this collection we took another random sample of size 12,000.", "Note that we preferred to take realtime tweet samples as we did not want to introduce bias by selecting certain negative examples, such as news or inspirational thoughts as in Castro et al. (2016) and Mihalcea and Strapparava (2005b).", "From both sources we ignored retweets, responses, citations and tweets containing links, as we wanted the text to be self-contained.", "As expected, both sources contained a mix of humorous and non-humorous tweets.", "In the case of humorous accounts, this may be due to the fact that many tweets are used to increase the number of followers, expressing an opinion on a current event or supporting some popular cause.", "We first aimed to have five votes for each tweet, and to decide which tweets were humorous by simple majority.", "However, at a certain stage during the annotation process, we noticed that the users were voting too many tweets as non-humorous, and the result was highly unbalanced.", "Because of this, we made some adjustments to the corpus and the process: as the target was to have five votes for each tweet, we considered that the tweets that already had three non-humorous annotations at this stage should be considered as not humor, and we then deprioritized them so the users could focus on annotating the rest of the tweets that were still ambiguous.", "(Footnote 1: The language detection feature is provided by the Twitter REST API.)", "(Figure 1 caption: The annotator is asked whether the tweet intends to be humorous. The available options are \"Yes\", \"No\" or \"Skip\". If the annotator selects \"Yes\", five emoji are shown so the annotator can specify how funny he considers the tweet. The emoji also include labels describing the funniness levels.)", "We also injected 4,500 more tweets randomly extracted only from the humorous accounts.", "These new tweets were also prioritized since they had fewer annotations than the rest.", "Annotation A crowdsourced web annotation task was carried out to tag all tweets 2 .", "The annotators were shown tweets as in Fig. 1.", "The tweets were randomly chosen, but web session information was kept to avoid showing duplicates.", "We tried to keep the user interface as intuitive and self-explanatory as possible, trying not to induce any bias in users and letting them come up with their own definition of humor.", "The simple and friendly interface is meant to keep the users engaged and having fun while classifying tweets as humorous or not, and rating how funny they are, with as few instructions as possible.", "If a person decides that a tweet is humorous, he has to rate it between
one to five by using emoji.", "In this way, the annotator gives more information rather than just stating that the tweet is humorous.", "We also allowed annotators to skip a tweet or to click a help button for more information.", "We consider that explicitly asking the annotator whether the text intends to be humorous makes the distinction between the Not Humorous and Not Funny classes less ambiguous, which we believe was a problem of the user interface of Castro et al. (2016).", "Also, we consider our emoji-rated funniness score to be clearer for annotators than their star-based rating.", "The web page was shared on popular social networks along with some context about the task, and the annotation period ran between March 8th and 27th, 2018.", "The first tweets shown in every session were the same: three tweets for which we know a clear answer (one of them was humorous and the other two were not).", "These first tweets (\"test tweets\") were meant as a way of introducing the user to how the interface works, and also as an initial way of evaluating the quality of the annotations.", "After the introductory tweets, the rest of the tweets were sampled randomly, starting with the ones with the fewest votes.", "Corpus The dataset consists of two CSV files: tweets and annotations.", "The former contains the identifier and origin (which can be the realtime samples or the selected accounts) for each one of the 27,282 tweets 3 , while the latter contains the tweet identifier, session identifier, date and annotation value for each one of the 117,800 annotations received during the annotation phase (including the 2,959 times the skip button was pressed).", "The dataset has been released and is available online 4 .", "When compiling the final version of the corpus, we considered the annotations of users that did not answer the first three tweets correctly as having lower quality.", "These sessions should not be used for training or testing machine learning algorithms.", "Fortunately, only a small number of annotations had to be discarded for this reason.", "The final number of annotations is 107,634 (not including the times the skip button was pressed), including 3,916 annotations assigned to the test tweets themselves.", "Analysis Annotation Distribution Each tweet received 3.8 annotations on average, with a standard deviation of 1.16, not considering the test tweets as they are outliers (they have a large number of annotations).", "The annotation distribution is shown in Fig. 2.", "(Footnote 3: Tweet text is not included in the corpus due to Twitter Terms and Conditions; the tweets can be obtained from their IDs.)", "The histogram is highly concentrated: more than 98% of the tweets received between two and six annotations each.", "Even though the strategy was to show random tweets among the ones with the fewest annotations, note that there are tweets with fewer than three annotations because some annotations were ultimately filtered out.", "At the same time, there are some tweets with more than six annotations because we merged annotations from a few dozen duplicate tweets.", "Also, note that there is a considerable number of tweets with at least six annotations (1,001).", "This subset can be useful for studying different annotator opinions on the same instances.", "Class Distribution Fig. 3 shows how the classes are distributed among the annotations.", "Roughly two thirds were assigned to the class Not Humorous, agreeing with the fact that there seem to be more non-humorous tweets from humorous accounts than the other way
around.", "The graph also indicates that there is a bias towards bad jokes in humor, according to the annotators.", "We use simple majority of votes for categorizing between humorous or not humorous, and weighted average for computing the funniness score only for humorous tweets.", "The scale goes from one (Not Funny) to five (Excellent).", "Under this scheme, 27.01% of the tweets are humorous, 70.6% are not-humorous while 2.39% is undecided (2.38% tied and 0.01% no annotations).", "At the same time, humorous tweets have little funniness overall: the funniness score average is 1.35 and standard deviation 0.85.", "Class Distribution Annotators Distribution There were 1, 271 annotators who tagged the tweets roughly as follows: two annotators tagged 13, 000 tweets, then one annotated 8, 000, the next eight annotated between one and three thousand, the next 105 annotated between one hundred and one thousand and the rest annotated less than a hundred, having 32, 584 annotations in total (see Fig.", "4 ).", "The average was 83 tags by annotator, with a standard deviation of 597.", "Annotators Agreement An important aspect to analyze is to what extent the annotators agree on which tweets are humorous.", "We used the alpha measure from Krippendorff (2012) , a generalized version of the kappa measure (Cohen, 1960; Fleiss, 1971 ) that takes in account an arbitray number of raters.", "The agreement alpha value on humorous versus nonhumorous is 0.5710.", "According to Fleiss (1981) , it means that the agreement is somewhat between \"moderate\" to \"substantial\", suggesting there is acceptable agreement but the humans cannot completely agree.", "We believe that the carefully designed user interface impacted in the quality of the annotation, as unlike Castro et al.", "(2016) this work's annotation web page presented less ambiguity between the class Not Humorous and Not Funny.", "We clearly outperformed their interannotator agreement (which was 0.3654).", "Additionally, if we consider the whole corpus (including the removed annotations), this figure decreases to 0.5512.", "This shows that the test tweets were helpful to filter out low quality annotations.", "Additionally, we can try to estimate to what extent the annotators agree on the funniness value of the tweets.", "In this case, disagreement between close values in the scale (e.g.", "Not Funny and Little Funny) should have less impact than disagreement between values that are further (e.g.", "Not Funny and Excellent).", "Following Stevens (1946) , in the previous case we were dealing with a nominal measure while in this case it is an ordinal measure.", "Alpha considers this into the formula by using a generic distance function between ratings, so we applied it and obtained a value of 0.1625 which is far from good; it is closer to a random annotation.", "There is a lack of agreement on the funniness.", "In this case, a machine will not be able to assign a unique value of funniness to a tweet, which makes sense with its subjectivity, albeit other techniques could be used (Geng, 2016) .", "In this case, if we consider the whole dataset, this number decreases to 0.1442.", "If we only consider the eleven annotators who tagged more than a thousand times (who tagged 50, 939 times in total), the humor and funniness agreement are respectively 0.6345 and 0.2635.", "Conclusion and Future Work Our main contribution is a corpus of tweets in Spanish labeled by their humor value and funniness score with respect to a crowd-sourced annotation.", "The dataset 
contains 27,282 tweets coming from multiple sources, with 107,634 annotations.", "The corpus showed high quality, as reflected by the significant inter-annotator agreement value.", "The dataset can be used to build a Spanish humor classifier, but it also serves as a first step toward tackling humor and funniness subjectivity.", "Even though more annotations per tweet would be appropriate, there is a subset of a thousand tweets with at least six annotations that could be used to study people's opinions on the same instances.", "Future steps involve gathering more annotations per tweet for a considerable number of tweets, so that techniques such as the ones in Geng (2016) could be used to study how people perceive the humorous pieces and what subjects and phrases they consider funnier.", "It would be interesting to consider social strata (e.g. origin, age and gender) when trying to find these patterns.", "Additionally, a similar dataset could be built for other languages that have more data to cross over with (such as English) and build a humor classifier exploiting re-" ] }
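The quality filter described above (keeping only sessions that answered the three introductory "test tweets" correctly) can be expressed as a single pass over the annotations CSV. The sketch below assumes hypothetical column names and placeholder test-tweet IDs; the released files may use a different layout.

    import csv

    TEST_ANSWERS = {"t1": "humorous", "t2": "not humorous", "t3": "not humorous"}  # placeholders

    def high_quality_sessions(path):
        # Collect each session's answers to the test tweets only.
        seen = {}  # session_id -> {tweet_id: value}
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                if row["tweet_id"] in TEST_ANSWERS:
                    seen.setdefault(row["session_id"], {})[row["tweet_id"]] = row["value"]
        # Keep sessions whose answers match all three known labels.
        return {sid for sid, ans in seen.items()
                if all(ans.get(t) == v for t, v in TEST_ANSWERS.items())}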
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Extraction", "Annotation", "Corpus", "Annotation Distribution", "Annotators Distribution", "Annotators Agreement", "Conclusion and Future Work" ] }
GEM-SciDuet-train-44#paper-1064#slide-17
Conclusion
We created a better version of a dataset to study Humor in Spanish. 27,282 tweets coming from multiple sources, with 107,634 high-quality annotations. Significant inter-annotator agreement value. It is also a first step to study subjectivity. Although more annotations per tweet would be appropriate, there is a subset of a thousand tweets with at least six annotations that could be used to study people's opinion on the same instances.
We created a better version of a dataset to study Humor in Spanish. 27,282 tweets coming from multiple sources, with 107,634 high-quality annotations. Significant inter-annotator agreement value. It is also a first step to study subjectivity. Although more annotations per tweet would be appropriate, there is a subset of a thousand tweets with at least six annotations that could be used to study people's opinion on the same instances.
[]
GEM-SciDuet-train-44#paper-1064#slide-18
1064
A Crowd-Annotated Spanish Corpus for Humor Analysis
Computational Humor involves several tasks, such as humor recognition, humor generation, and humor scoring, for which it is useful to have human-curated data. In this work we present a corpus of 27,000 tweets written in Spanish and crowd-annotated with their humor value and funniness score, with about four annotations per tweet, tagged by 1,300 people over the Internet. It is equally divided between tweets coming from humorous and non-humorous accounts. The inter-annotator agreement (Krippendorff's alpha) value is 0.5710. The dataset is available for general usage and can serve as a basis for humor detection and as a first step toward tackling subjectivity.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction Computational Humor studies humor from a computational perspective, involving several tasks such as humor recognition, which aims to tell if a piece of text is humorous or not; humor generation, with the objective of generating new texts with funny content; and humor scoring, whose goal is to predict how funny a piece of text is.", "In order to carry out this kind of tasks through supervised machine learning methods, humancurated data is necessary.", "Castro et al.", "(2016) built a humor classifier for Spanish and provided a dataset for humor recognition.", "However, there are some issues: few annotations per instance, low annotator agreement, and limited variety of sources for the humorous and mostly for the nonhumorous tweets (the latter were only about news, inspirational thoughts and curious facts).", "Up to our knowledge, there is no other dataset to work on humor comprehension in Spanish.", "Some other authors, such as Mihalcea and Strapparava (2005a,b) ; Sjöbergh and Araki (2007) have tackled humor recognition in English texts, building their own corpora by downloading one-liners (onesentence jokes) from the Internet, since working with longer texts would involve additional work, such as determining humor scope.", "The microblogging platform Twitter has been found particularly useful for building humor corpora due to its public availability and the fact that its short messages are suitable for jokes or humorous comments.", "Castro et al.", "(2016) built their corpus based on Twitter, selecting nine humorous accounts and nine non-humorous accounts about news, thoughts and curious facts.", "Reyes et al.", "(2013) built a corpus for detecting irony in tweets by searching for several hashtags (i.e., #irony, #humor, #education and #politics), which is also used in Barbieri and Saggion (2014) to train a classifier that detects humor.", "More recently, Potash et al.", "(2017) built a corpus based on tweets that aims to distinguish the degree of funniness in a given tweet.", "They used the tweet set issued in response to a TV game show, labeling which tweets were considered humorous by the show.", "In this work we present a crowd-annotated Spanish corpus of tweets tagged with a humor/no humor value and also by a funniness score from one to five.", "The corpus contains tweets extracted from varied sources and has several annotations per tweet, reaching a high humor inter-annotator agreement.", "The contribution of this work is twofold: the dataset is not only useful for building a humor classifier but it also serves to approach subjectivity in humor and funniness.", "Even though there are not enough annotations per tweet as required to study subjectivity in a genuine way with techniques such as the ones by Geng (2016) , the dataset aids as a playground to study the funniness and disagree-ment among several people.", "This document is organized as follows.", "Section 2 explains where and how we obtained the data, and Section 3 describes how it was annotated.", "In Section 
4 we present the corpus, and we address the analysis in Section 5.", "Finally, in Section 6 we draw the conclusions and present future work.", "Extraction The aim of the extraction and annotation process was to build a corpus of at least 20,000 tweets that was as balanced as possible between the humor and not humor classes.", "Furthermore, as we intended to have a way of calculating the funniness score of a tweet, we needed to have several votes for the tweets that were considered humorous.", "As we wanted to have both humorous and non-humorous tweet samples, we extracted tweets from selected accounts and from realtime samples.", "For the former, based on Castro et al. (2016), we selected tweets from fifty humorous accounts from Spanish-speaking countries, and took a random sample of size 12,000.", "For the latter, we fetched tweet samples written in Spanish throughout February 2018 1 , and from this collection we took another random sample of size 12,000.", "Note that we preferred to take realtime tweet samples as we did not want to introduce bias by selecting certain negative examples, such as news or inspirational thoughts as in Castro et al. (2016) and Mihalcea and Strapparava (2005b).", "From both sources we ignored retweets, responses, citations and tweets containing links, as we wanted the text to be self-contained.", "As expected, both sources contained a mix of humorous and non-humorous tweets.", "In the case of humorous accounts, this may be due to the fact that many tweets are used to increase the number of followers, expressing an opinion on a current event or supporting some popular cause.", "We first aimed to have five votes for each tweet, and to decide which tweets were humorous by simple majority.", "However, at a certain stage during the annotation process, we noticed that the users were voting too many tweets as non-humorous, and the result was highly unbalanced.", "Because of this, we made some adjustments to the corpus and the process: as the target was to have five votes for each tweet, we considered that the tweets that already had three non-humorous annotations at this stage should be considered as not humor, and we then deprioritized them so the users could focus on annotating the rest of the tweets that were still ambiguous.", "(Footnote 1: The language detection feature is provided by the Twitter REST API.)", "(Figure 1 caption: The annotator is asked whether the tweet intends to be humorous. The available options are \"Yes\", \"No\" or \"Skip\". If the annotator selects \"Yes\", five emoji are shown so the annotator can specify how funny he considers the tweet. The emoji also include labels describing the funniness levels.)", "We also injected 4,500 more tweets randomly extracted only from the humorous accounts.", "These new tweets were also prioritized since they had fewer annotations than the rest.", "Annotation A crowdsourced web annotation task was carried out to tag all tweets 2 .", "The annotators were shown tweets as in Fig. 1.", "The tweets were randomly chosen, but web session information was kept to avoid showing duplicates.", "We tried to keep the user interface as intuitive and self-explanatory as possible, trying not to induce any bias in users and letting them come up with their own definition of humor.", "The simple and friendly interface is meant to keep the users engaged and having fun while classifying tweets as humorous or not, and rating how funny they are, with as few instructions as possible.", "If a person decides that a tweet is humorous, he has to rate it between
one to five by using emoji.", "In this way, the annotator gives more information rather than just stating that the tweet is humorous.", "We also allowed annotators to skip a tweet or to click a help button for more information.", "We consider that explicitly asking the annotator whether the text intends to be humorous makes the distinction between the Not Humorous and Not Funny classes less ambiguous, which we believe was a problem of the user interface of Castro et al. (2016).", "Also, we consider our emoji-rated funniness score to be clearer for annotators than their star-based rating.", "The web page was shared on popular social networks along with some context about the task, and the annotation period ran between March 8th and 27th, 2018.", "The first tweets shown in every session were the same: three tweets for which we know a clear answer (one of them was humorous and the other two were not).", "These first tweets (\"test tweets\") were meant as a way of introducing the user to how the interface works, and also as an initial way of evaluating the quality of the annotations.", "After the introductory tweets, the rest of the tweets were sampled randomly, starting with the ones with the fewest votes.", "Corpus The dataset consists of two CSV files: tweets and annotations.", "The former contains the identifier and origin (which can be the realtime samples or the selected accounts) for each one of the 27,282 tweets 3 , while the latter contains the tweet identifier, session identifier, date and annotation value for each one of the 117,800 annotations received during the annotation phase (including the 2,959 times the skip button was pressed).", "The dataset has been released and is available online 4 .", "When compiling the final version of the corpus, we considered the annotations of users that did not answer the first three tweets correctly as having lower quality.", "These sessions should not be used for training or testing machine learning algorithms.", "Fortunately, only a small number of annotations had to be discarded for this reason.", "The final number of annotations is 107,634 (not including the times the skip button was pressed), including 3,916 annotations assigned to the test tweets themselves.", "Analysis Annotation Distribution Each tweet received 3.8 annotations on average, with a standard deviation of 1.16, not considering the test tweets as they are outliers (they have a large number of annotations).", "The annotation distribution is shown in Fig. 2.", "(Footnote 3: Tweet text is not included in the corpus due to Twitter Terms and Conditions; the tweets can be obtained from their IDs.)", "The histogram is highly concentrated: more than 98% of the tweets received between two and six annotations each.", "Even though the strategy was to show random tweets among the ones with the fewest annotations, note that there are tweets with fewer than three annotations because some annotations were ultimately filtered out.", "At the same time, there are some tweets with more than six annotations because we merged annotations from a few dozen duplicate tweets.", "Also, note that there is a considerable number of tweets with at least six annotations (1,001).", "This subset can be useful for studying different annotator opinions on the same instances.", "Class Distribution Fig. 3 shows how the classes are distributed among the annotations.", "Roughly two thirds were assigned to the class Not Humorous, agreeing with the fact that there seem to be more non-humorous tweets from humorous accounts than the other way
around.", "The graph also indicates that there is a bias towards bad jokes in humor, according to the annotators.", "We use simple majority of votes for categorizing between humorous or not humorous, and weighted average for computing the funniness score only for humorous tweets.", "The scale goes from one (Not Funny) to five (Excellent).", "Under this scheme, 27.01% of the tweets are humorous, 70.6% are not-humorous while 2.39% is undecided (2.38% tied and 0.01% no annotations).", "At the same time, humorous tweets have little funniness overall: the funniness score average is 1.35 and standard deviation 0.85.", "Class Distribution Annotators Distribution There were 1, 271 annotators who tagged the tweets roughly as follows: two annotators tagged 13, 000 tweets, then one annotated 8, 000, the next eight annotated between one and three thousand, the next 105 annotated between one hundred and one thousand and the rest annotated less than a hundred, having 32, 584 annotations in total (see Fig.", "4 ).", "The average was 83 tags by annotator, with a standard deviation of 597.", "Annotators Agreement An important aspect to analyze is to what extent the annotators agree on which tweets are humorous.", "We used the alpha measure from Krippendorff (2012) , a generalized version of the kappa measure (Cohen, 1960; Fleiss, 1971 ) that takes in account an arbitray number of raters.", "The agreement alpha value on humorous versus nonhumorous is 0.5710.", "According to Fleiss (1981) , it means that the agreement is somewhat between \"moderate\" to \"substantial\", suggesting there is acceptable agreement but the humans cannot completely agree.", "We believe that the carefully designed user interface impacted in the quality of the annotation, as unlike Castro et al.", "(2016) this work's annotation web page presented less ambiguity between the class Not Humorous and Not Funny.", "We clearly outperformed their interannotator agreement (which was 0.3654).", "Additionally, if we consider the whole corpus (including the removed annotations), this figure decreases to 0.5512.", "This shows that the test tweets were helpful to filter out low quality annotations.", "Additionally, we can try to estimate to what extent the annotators agree on the funniness value of the tweets.", "In this case, disagreement between close values in the scale (e.g.", "Not Funny and Little Funny) should have less impact than disagreement between values that are further (e.g.", "Not Funny and Excellent).", "Following Stevens (1946) , in the previous case we were dealing with a nominal measure while in this case it is an ordinal measure.", "Alpha considers this into the formula by using a generic distance function between ratings, so we applied it and obtained a value of 0.1625 which is far from good; it is closer to a random annotation.", "There is a lack of agreement on the funniness.", "In this case, a machine will not be able to assign a unique value of funniness to a tweet, which makes sense with its subjectivity, albeit other techniques could be used (Geng, 2016) .", "In this case, if we consider the whole dataset, this number decreases to 0.1442.", "If we only consider the eleven annotators who tagged more than a thousand times (who tagged 50, 939 times in total), the humor and funniness agreement are respectively 0.6345 and 0.2635.", "Conclusion and Future Work Our main contribution is a corpus of tweets in Spanish labeled by their humor value and funniness score with respect to a crowd-sourced annotation.", "The dataset 
contains 27,282 tweets coming from multiple sources, with 107,634 annotations.", "The corpus showed high quality, as reflected by the significant inter-annotator agreement value.", "The dataset can be used to build a Spanish humor classifier, but it also serves as a first step toward tackling humor and funniness subjectivity.", "Even though more annotations per tweet would be appropriate, there is a subset of a thousand tweets with at least six annotations that could be used to study people's opinions on the same instances.", "Future steps involve gathering more annotations per tweet for a considerable number of tweets, so that techniques such as the ones in Geng (2016) could be used to study how people perceive the humorous pieces and what subjects and phrases they consider funnier.", "It would be interesting to consider social strata (e.g. origin, age and gender) when trying to find these patterns.", "Additionally, a similar dataset could be built for other languages that have more data to cross over with (such as English) and build a humor classifier exploiting re-" ] }
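The ordinal agreement figures quoted above come from the generalized form of alpha, in which a distance function between ratings replaces the nominal 0/1 disagreement. A compact sketch of that generalization follows; the interval (squared-difference) metric shown for the one-to-five funniness scale is one common choice, not necessarily the exact metric the authors used.

    from collections import Counter
    from itertools import permutations

    def krippendorff_alpha(units, delta):
        # units: one list of ratings per tweet; delta: distance between two ratings.
        o = Counter()
        for ratings in units:
            m = len(ratings)
            if m < 2:
                continue
            for a, b in permutations(ratings, 2):
                o[(a, b)] += 1.0 / (m - 1)
        n = sum(o.values())
        n_c = Counter()
        for (a, _), c in o.items():
            n_c[a] += c
        d_obs = sum(c * delta(a, b) for (a, b), c in o.items()) / n
        d_exp = sum(n_c[a] * n_c[b] * delta(a, b) for a in n_c for b in n_c) / (n * (n - 1))
        return 1.0 - d_obs / d_exp

    nominal = lambda a, b: 0.0 if a == b else 1.0
    interval = lambda a, b: float(a - b) ** 2  # one option for the 1-5 scale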
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Extraction", "Annotation", "Corpus", "Annotation Distribution", "Annotators Distribution", "Annotators Agreement", "Conclusion and Future Work" ] }
GEM-SciDuet-train-44#paper-1064#slide-18
HAHA Task
Two subtasks: Humor Classification and Funniness. Subset of 20k tweets. 7 and 2 submissions, respectively.
Two subtasks: Humor Classification and Funniness. Subset of 20k tweets. 7 and 2 submissions, respectively.
[]
GEM-SciDuet-train-45#paper-1065#slide-0
1065
A Dependency-to-String Model for Chinese-Japanese SMT System
This paper describes the Beijing Jiaotong University Chinese-Japanese machine translation system which participated in the 2nd Workshop on Asian Translation (WAT2015). We exploit the syntactic and semantic knowledge encoded in dependency trees to build a dependency-to-string translation model for Chinese-Japanese statistical machine translation (SMT). Our system achieves a BLEU of 34.87 and a RIBES of 79.25 on the Chinese-Japanese translation task in the official evaluation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65 ], "paper_content_text": [ "Introduction Motivated by representing the grammatical function of the constituents of a sentence or phrase,dependency grammar holds both syntactic and semantic knowledge.How to building translation model by exploiting the syntactic and semantic knowledge encoded in dependency tree has been now one of the most popular research topics in the recent years. In dependency tree based models, researchers propose some tree decomposition methods or grammars to build translation model.", "These models can be classified into string-to-tree model, tree-to-tree model and tree-to-string model.", "Our system participated in WAT2015 (Nakazawa et al., 2015) adopts tree-to-string model.", "Particularly, we use the dependency-to-string translation method proposed by (Xie et al., 2011) in Chinese-Japanese translation task.", "This method proposes a novel tree decompose tion, which takes head-dependents relation (HDR) fragments as elementary structures of rule extraction.", "An HDR is a tree fragment composed of a head and all its dependents.", "In this method, the translation rules are expressed with the source side as generalized HDR fragments and the target sides as strings.", "The model takes substitution as the only operation and can specify reordering information directly into translation rules, thus requires no additional heuristics or reordering models as the previous works.", "And the model is more concise.", "Section 2 describes dependency-to-string translation model in detail.", "Section 3 reports on our experiment results on a Chinese-SMT system.", "Section 4 concludes this paper.", "Dependency-to-String Translation Model In this paper, we describes the translation model in four aspects, dependency-to-string grammar, translation rule acquisition, the model and the decoding.", "Dependency-to-String Grammar A dependency structure for a sentence is a directed acyclic graph with words as nodes and modification relations as edges, each edge directing from a head to a dependent.", "Figure 1 FIFA World Cup in South Africa successfully hold Here are some properties of a HDR fragment : 1) head determines the syntactic category of HDR, and can often replace HDR; 2) head determines the semantic category of HDR; dependent gives semantic specification.", "According to the above properties, we can represent the corresponding HDR fragment with head.", "The translation rules of dependency-to-string model can be classified into two categories: -HDR rules, which represent the source side as generalized HDR fragments and the target sides as strings and act as both translation rules and reordering rules.", "-H rules, which represent the source side as a word and the target side as words or strings and are used for translating words.", "Figure 1 shows examples of the two translation rules.", "(b), (c) and (d) are three examples of HDR rules, and (d) is an example of H rules.", "In the figure, the nodes modified by \"*\" are head of HDR fragment.", "By the way, the three HDR rules describes translation ways of the same sentence pattern (that is, constituted by \"noun phrase + preposition phrase + adverb + verb\" ) and different contexts.", "Thereinto, rule (b) appoints its context completely, rule (c) restrains its 
context partially, and rule (d) places no restraint on its context.", "Rule Acquisition The rule acquisition of the dependency-to-string model begins with a parallel corpus with word-aligned results, the source-side dependency structures and the target-side sentences.", "We accomplish automatic rule acquisition through the following three steps: 1) Tree annotation: annotate the necessary information on each node of the dependency trees for translation rule acquisition.", "2) Acceptable HDR fragment identification: identify HDR fragments from the annotated trees for HDR rule generation.", "3) HDR rule generation: generate a series of HDR rules according to the identified acceptable HDR fragments.", "The following describes each of these in detail.", "Tree Annotation and Acceptable HDR Fragments Identification The tree annotation can be accomplished by a single post-order traversal of the dependency tree T. For each node n of T, we annotate it with a head span hsp(n) and a dependency span dsp(n) (Xie et al., 2011).", "During the recursive walk, we calculate hsp(n) according to the alignment relations for each node n accessed.", "The dsp(n) can be obtained from hsp(n) and the dependency spans of all dependents of n. After tree annotation, we can identify HDR fragments for HDR rule generation, according to the head span and dependency span of each node.", "HDR Rules and H Rules Generation According to the identified acceptable HDR fragments, a series of lexicalized and unlexicalized HDR rules is generated.", "This paper does not describe this in detail; the reader can refer to Xie et al. (2011).", "H rule acquisition can be implemented as a subprocedure of HDR rule acquisition.", "Specifically, in the recursive walk of the dependency tree, an H rule is generated according to the alignment information for each node accessed.", "Translation Model Given the dependency-to-string grammar, for a given source-language dependency tree T, the grammar may generate more than one derivation D that converts the source dependency tree T into a target string e, thus producing a variety of candidate translations.", "To compare the candidate translations, we adopt a general log-linear model (Och and Ney, 2002) and define the score of D as: P(D) ∝ ∏_i ϕ_i(D)^{λ_i} (1) where ϕ_i(D) is a feature function defined on derivation D and the λ_i are the feature weights.", "Our paper uses seven features, as follows: 1) translation probabilities: P(t|s) and P(s|t); 2) lexical translation probabilities: Plex(t|s) and Plex(s|t); 3) rule penalty: exp(-1); 4) target word penalty: exp(|e|); 5) language model: Plm(e).", "Decoding Our decoder is based on a bottom-up chart parsing algorithm that converts the input dependency structure into a target string.", "It finds the best derivation among all possible derivations D. Given a source dependency structure T, the decoder traverses each internal node n of T in post-order.", "Each node is processed as follows.", "1) If n is a leaf node, the decoder checks the rule set for matched H rules and uses those rules to generate candidate translations; 2) If n is an internal node, it enumerates all instances of the related sentence, clauses or phrases of the HDR fragment rooted at n, and checks the translation rule set for matched translation rules.", "If there are no matched rules, we construct a pseudo translation rule according to the word order of the HDR fragment on the source side; 3) The cube pruning algorithm (Chiang, 2007; Huang and Chiang, 2007) is used to generate the candidate translations for the node n.
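Equation (1) is easiest to handle in log space, where the score of a derivation is a weighted sum of log feature values and the best derivation maximizes that sum. The sketch below illustrates this with hypothetical feature names and placeholder weights; in the system, the weights come from minimum error rate training.

    import math

    def derivation_score(features, weights):
        # log P(D) up to a constant: sum_i lambda_i * log(phi_i(D))
        return sum(weights[name] * math.log(value) for name, value in features.items())

    # Hypothetical feature values for one derivation:
    features = {
        "p_t_given_s": 0.30, "p_s_given_t": 0.20,        # translation probabilities
        "plex_t_given_s": 0.25, "plex_s_given_t": 0.15,  # lexical translation probabilities
        "rule_penalty": math.exp(-1),                    # one rule used
        "word_penalty": math.exp(5),                     # |e| = 5 target words
        "lm": 1e-6,                                      # language model probability
    }
    weights = {name: 1.0 for name in features}           # placeholder weights
    score = derivation_score(features, weights)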
To balance the decoder's performance and speed, we use four constraints as follows: 1) Beam-threshold: we get the score threshold from the best score in the current stack multiplied by a fixed ratio.", "The candidate translations with a score worse than the score threshold will be discarded; 2) beam-limit: the maximum number of candidate translations in the beam; 3) rule-threshold: we get the rule score threshold from the best score multiplied by a fixed ratio in the rule table queue.", "The rules with a score worse than rule score threshold will be dis-carded; 4) rule-limit: the maximum number of rules in the rule table queue.", "For our experiments, we set the beam-threshold = 10 -2 , beam-limit = 100, rule-threshold = 10 -2 and rule-limit = 100.", "Table 1 The comparison results of the two systems Then, we use dependency-to-string model described in Section 2 to build a Chinese-Japanese translation system.", "And use the BLUE score and RIBES score for evaluation.", "Experiments Data preparation Experiments and Evaluation Results The Chinese-Japanese translation system (Dep2str) consists of three modules: 1) Rule extraction module: extract rules using the Chinese dependency tree, the Japanese sentence and alignment information of the training corpus.", "2) Decoding module: decode the Chinese sentences for the n-best Japanese translations according to the model parameters that have been set.", "3) Training module: train the translation model using minimum error rate to get the best parameters on the development data.", "We then decode the test data using the system.", "Table 1 shows the number of the extracted translation rules and the translation performance on the test data.", "Furthermore, we implemented a MOSES PBSMT system (Koehn et al., 2002) as the baseline for a comparison.", "In our experiments the value of the distortion limit of the baseline system is the default.", "The number of translation rules and translation performance of the baseline system are also showed in the table.", "In terms of the number of translation rules, the number of the extracted translation rules in the baseline system is over 3 times more than that of dep2str system.", "We think that the lack of restrictions on syntactic structure resulted in this.", "In terms of translation performance, the BLEU score and RIBES score on the test data achieved by dep2str system are higher than the baseline system by 0.62 and 0.31 respectively.", "These evaluation results illustrate that the translation system based on the dependency-to-string model is effective on the Chinese-Japanese translation task.", "Conclusions This paper describes the Beijing Jiaotong University Chinese-Japanese machine translation system participated in WAT2015.", "The system employs a dependency-to-string model, which takes the HDR fragments as elementary structures for the rule extraction and directly specifies the ordering information in translation rules, making the decoding algorithm simplified.", "The experiment results on the ASPEC data showed that the BLEU score and the RIBES score are increased by 0.62 and 0.31 respectively, compared with the phrase-based system.", "At present, the accuracy of the Chinese dependency parsing is not very high, and our system's performance is affected by the accuracy.", "Meanwhile, we filtered out the sentences which could not be parsed by the dependency parser.", "This caused a decrease in the amount of training data by about 100 thousand sentence pairs.", "We think that the system's performance will be 
improved with Chinese dependency parsing with high accuracy." ] }
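Equation (1) above is straightforward to operationalize: in log space the derivation score is a weighted sum of log feature values. The following Python sketch scores one derivation with the paper's seven features; the feature names, the toy values and the uniform weights are placeholders of ours, not the authors' implementation (the paper tunes the weights with minimum error rate training on development data):

import math

def loglinear_score(features, weights):
    # log P(D) = sum_i lambda_i * log phi_i(D), up to a normalizing constant
    assert features.keys() == weights.keys()
    return sum(weights[name] * math.log(features[name]) for name in features)

# Toy feature values for one derivation; all numbers are made up.
features = {
    "p_t_given_s": 0.40,            # P(t|s)
    "p_s_given_t": 0.30,            # P(s|t)
    "plex_t_given_s": 0.20,         # Plex(t|s)
    "plex_s_given_t": 0.25,         # Plex(s|t)
    "rule_penalty": math.exp(-3),   # exp(-1) per rule, here for 3 rules
    "word_penalty": math.exp(5),    # exp(|e|) with |e| = 5 target words
    "lm": 1e-6,                     # Plm(e)
}
weights = {name: 1.0 for name in features}  # uniform here; MERT-tuned in the paper

print(loglinear_score(features, weights))   # larger is better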
{ "paper_header_number": [ "1", "2", "2.1", "2010", "2.2", "2.2.2", "2.3", "2.4", "3.2", "4" ], "paper_header_content": [ "Introduction", "Dependency-to-String Translation Model", "Dependency-to-String Grammar", "FIFA World Cup in South Africa successfully hold", "Rule Acquisition", "HDR Rules and H Rules Generation", "Translation Model", "Decoding", "Experiments and Evaluation Results", "Conclusions" ] }
GEM-SciDuet-train-45#paper-1065#slide-0
Dependency to String Grammar
HDR rules: the source side is generalized HDR fragments and the target side is strings. H rules: the source side is a word and the target side is words or strings.
HDR rules: the source side is generalized HDR fragments and the target side is strings. H rules: the source side is a word and the target side is words or strings.
[]
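To make the two rule types in the grammar above concrete, here is a small illustrative sketch of how HDR rules and H rules could be represented as data. The encoding (slot variables like "x1:NP", the example words, and the dataclass layout) is our own invention for illustration, not the representation used in the paper or in Xie et al. (2011):

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class HDRRule:
    # Source side: a head plus ordered dependent slots. A slot is either a
    # concrete word (lexicalized) or a variable such as "x1:NP" that matches
    # any subtree of that category (generalized).
    head: str
    dependents: Tuple[str, ...]
    # Target side: a string whose variables refer to the slots, so the rule
    # itself encodes the reordering.
    target: str

@dataclass(frozen=True)
class HRule:
    source: str   # a single source word
    target: str   # its target word(s)

# A fully lexicalized rule (rule (b) style) vs. a fully generalized one
# (rule (d) style) for the pattern "noun phrase + prepositional phrase + adverb + verb":
rule_b = HDRRule("hold*", ("FIFA World Cup", "in South Africa", "successfully"),
                 "FIFA World Cup was successfully held in South Africa")
rule_d = HDRRule("x4:VV*", ("x1:NP", "x2:PP", "x3:AD"), "x1 x3 x4 x2")
rule_h = HRule("hold", "held")
print(rule_d)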
GEM-SciDuet-train-45#paper-1065#slide-1
1065
A Dependency-to-String Model for Chinese-Japanese SMT System
This paper describes the Beijing Jiaotong University Chinese-Japanese machine translation system which participated in the 2st Workshop on Asian Translation (WAT2015). We exploit the syntactic and semantic knowledge encoded in dependency tree to build a dependency-to-string translation model for Chinese-Japanese statistical machine translation (SMT). Our system achieves a BLEU of 34.87 and a RIBES of 79.25 on the Chinese-Japanese translation task in the official evaluation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65 ], "paper_content_text": [ "Introduction Motivated by representing the grammatical function of the constituents of a sentence or phrase,dependency grammar holds both syntactic and semantic knowledge.How to building translation model by exploiting the syntactic and semantic knowledge encoded in dependency tree has been now one of the most popular research topics in the recent years. In dependency tree based models, researchers propose some tree decomposition methods or grammars to build translation model.", "These models can be classified into string-to-tree model, tree-to-tree model and tree-to-string model.", "Our system participated in WAT2015 (Nakazawa et al., 2015) adopts tree-to-string model.", "Particularly, we use the dependency-to-string translation method proposed by (Xie et al., 2011) in Chinese-Japanese translation task.", "This method proposes a novel tree decompose tion, which takes head-dependents relation (HDR) fragments as elementary structures of rule extraction.", "An HDR is a tree fragment composed of a head and all its dependents.", "In this method, the translation rules are expressed with the source side as generalized HDR fragments and the target sides as strings.", "The model takes substitution as the only operation and can specify reordering information directly into translation rules, thus requires no additional heuristics or reordering models as the previous works.", "And the model is more concise.", "Section 2 describes dependency-to-string translation model in detail.", "Section 3 reports on our experiment results on a Chinese-SMT system.", "Section 4 concludes this paper.", "Dependency-to-String Translation Model In this paper, we describes the translation model in four aspects, dependency-to-string grammar, translation rule acquisition, the model and the decoding.", "Dependency-to-String Grammar A dependency structure for a sentence is a directed acyclic graph with words as nodes and modification relations as edges, each edge directing from a head to a dependent.", "Figure 1 FIFA World Cup in South Africa successfully hold Here are some properties of a HDR fragment : 1) head determines the syntactic category of HDR, and can often replace HDR; 2) head determines the semantic category of HDR; dependent gives semantic specification.", "According to the above properties, we can represent the corresponding HDR fragment with head.", "The translation rules of dependency-to-string model can be classified into two categories: -HDR rules, which represent the source side as generalized HDR fragments and the target sides as strings and act as both translation rules and reordering rules.", "-H rules, which represent the source side as a word and the target side as words or strings and are used for translating words.", "Figure 1 shows examples of the two translation rules.", "(b), (c) and (d) are three examples of HDR rules, and (d) is an example of H rules.", "In the figure, the nodes modified by \"*\" are head of HDR fragment.", "By the way, the three HDR rules describes translation ways of the same sentence pattern (that is, constituted by \"noun phrase + preposition phrase + adverb + verb\" ) and different contexts.", "Thereinto, rule (b) appoints its context completely, rule (c) restrains its 
context partially and rule (d) has no restraint for its context.", "Rule Acquisition The rule acquisition of dependency-to-string model begins with a parallel corpus with word-aligned results, the source dependency structures and the target side sentence.", "We accomplish the rule automatic acquisition through the following three steps: 1) Tree annotation: annotate the necessary information on each node of depend ency trees for translation rule acquisition.", "2) Acceptable HDR fragments identification: identify HDR fragments from the annotated trees for HDR rules generation.", "3) HDR rules generation: generate a series of HDR rules according to the identified acceptable HDR fragments.", "The following describes each of these in detail.", "Tree Annotation and Acceptable HDR Fragments Identification 83 The tree annotation can be accomplished by a single postorder transversal of dependency tree T. For each node n of T, we annotated with head span hsp(n) and dependency span dsp(n) (Xie et al., 2011) .", "During the recursive walk, we calculate hsp(n) according to alignment relation for each node n accessed.", "The dsp(n) can be obtained according to hsp(n) and dependency span of all dependents of n. After tree annotation, we can identify HDR fragments for HDR rules generation, according to head span and dependency span of each node.", "HDR Rules and H Rules Generation According to the identified acceptable HDR fragments, a series of lexicalized and unlexicalized HDR rules will be generated.", "This paper will not describe in detail about it and you can refer to (Xie et al., 2011) .", "H rules acquisition can be implemented as a sub procedure of HDR rules acquisition.", "Specifically, in the recursive walk of dependency tree, a H rule is generated according to alignment information for each node accessed.", "Translation Model Given the dependency-to-string grammar, for a given source language dependency tree T, it may generate more than one derivations D that convert a source dependency tree T into a target string e, thus producing varieties of candidate translations.", "To compare the candidate translations, we adopt a general log-linear model (Och and Ney, 2002) to define D as: P(D) ∝ ∏ ϕi (D) λi (1) where ϕi (D) is feature function defined on derivation D and λi are the feature weights.", "Our paper used seven features as follows: 1) translation probabilities: P(t|s) and P(s|t); 2) lexical translation probabilities: Plex (t|s) and Plex (s|t); 3) rule penalty: exp(-1); 4) target word penalty: exp(|e|); 5) language model : Plm(e); Decoding Our decoder is based on bottom up chart parsing algorithm that convert the input dependency structure into a target string.", "It finds the best derivation among all possible derivations D. Given a source dependency structure T, the decoder traverses each internal node n of T in post-order.", "And we process it as follows.", "1) If n is a leaf node, it checks the rule set for matched translation rules H and uses the rules to generate candidate translation; 2) If n is a internal node, it enumerates all instances of the related sentence, clauses or phrases of the HDR fragment rooted at n, and checks the translation rule set for matched translation rules.", "If there is no matched rules, we construct a pseudo translation rule according to the word order of the HDR fragment in the source side; 3) Make use of Cube Pruning algorithm (Chiang, 2007; Huang and Chiang, 2007) to generate the candidate translation for the node n. 
To balance the decoder's performance and speed, we use four constraints as follows: 1) Beam-threshold: we get the score threshold from the best score in the current stack multiplied by a fixed ratio.", "The candidate translations with a score worse than the score threshold will be discarded; 2) beam-limit: the maximum number of candidate translations in the beam; 3) rule-threshold: we get the rule score threshold from the best score multiplied by a fixed ratio in the rule table queue.", "The rules with a score worse than rule score threshold will be dis-carded; 4) rule-limit: the maximum number of rules in the rule table queue.", "For our experiments, we set the beam-threshold = 10 -2 , beam-limit = 100, rule-threshold = 10 -2 and rule-limit = 100.", "Table 1 The comparison results of the two systems Then, we use dependency-to-string model described in Section 2 to build a Chinese-Japanese translation system.", "And use the BLUE score and RIBES score for evaluation.", "Experiments Data preparation Experiments and Evaluation Results The Chinese-Japanese translation system (Dep2str) consists of three modules: 1) Rule extraction module: extract rules using the Chinese dependency tree, the Japanese sentence and alignment information of the training corpus.", "2) Decoding module: decode the Chinese sentences for the n-best Japanese translations according to the model parameters that have been set.", "3) Training module: train the translation model using minimum error rate to get the best parameters on the development data.", "We then decode the test data using the system.", "Table 1 shows the number of the extracted translation rules and the translation performance on the test data.", "Furthermore, we implemented a MOSES PBSMT system (Koehn et al., 2002) as the baseline for a comparison.", "In our experiments the value of the distortion limit of the baseline system is the default.", "The number of translation rules and translation performance of the baseline system are also showed in the table.", "In terms of the number of translation rules, the number of the extracted translation rules in the baseline system is over 3 times more than that of dep2str system.", "We think that the lack of restrictions on syntactic structure resulted in this.", "In terms of translation performance, the BLEU score and RIBES score on the test data achieved by dep2str system are higher than the baseline system by 0.62 and 0.31 respectively.", "These evaluation results illustrate that the translation system based on the dependency-to-string model is effective on the Chinese-Japanese translation task.", "Conclusions This paper describes the Beijing Jiaotong University Chinese-Japanese machine translation system participated in WAT2015.", "The system employs a dependency-to-string model, which takes the HDR fragments as elementary structures for the rule extraction and directly specifies the ordering information in translation rules, making the decoding algorithm simplified.", "The experiment results on the ASPEC data showed that the BLEU score and the RIBES score are increased by 0.62 and 0.31 respectively, compared with the phrase-based system.", "At present, the accuracy of the Chinese dependency parsing is not very high, and our system's performance is affected by the accuracy.", "Meanwhile, we filtered out the sentences which could not be parsed by the dependency parser.", "This caused a decrease in the amount of training data by about 100 thousand sentence pairs.", "We think that the system's performance will be 
improved with Chinese dependency parsing with high accuracy." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2010", "2.2", "2.2.2", "2.3", "2.4", "3.2", "4" ], "paper_header_content": [ "Introduction", "Dependency-to-String Translation Model", "Dependency-to-String Grammar", "FIFA World Cup in South Africa successfully hold", "Rule Acquisition", "HDR Rules and H Rules Generation", "Translation Model", "Decoding", "Experiments and Evaluation Results", "Conclusions" ] }
GEM-SciDuet-train-45#paper-1065#slide-1
Rule Acquisition
Annotate the necessary information on each node of the dependency trees for translation rule acquisition Identification of acceptable HDR fragments Identify HDR fragments from the annotated trees for HDR rules generation Generate a set of HDR rules according to the identified acceptable HDR fragments
Annotate the necessary information on each node of the dependency trees for translation rule acquisition Identification of acceptable HDR fragments Identify HDR fragments from the annotated trees for HDR rules generation Generate a set of HDR rules according to the identified acceptable HDR fragments
[]
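The head span / dependency span annotation summarized in this slide can be sketched as a single postorder pass. The helper below is an illustrative reconstruction under simplifying assumptions of ours — hsp(n) is taken to be the set of target positions aligned to node n, and dsp(n) the union of hsp(n) with the dependency spans of n's dependents; the actual definitions in Xie et al. (2011) include consistency checks that are omitted here:

def annotate(tree, alignment):
    """Postorder traversal annotating each node with hsp(n) and dsp(n).

    tree: dict mapping node id -> list of child ids (dependents)
    alignment: dict mapping source node id -> set of aligned target positions
    Returns (hsp, dsp) as dicts of frozensets of target positions."""
    hsp, dsp = {}, {}

    def visit(n):
        for child in tree.get(n, []):
            visit(child)
        hsp[n] = frozenset(alignment.get(n, ()))   # from the word alignment
        span = set(hsp[n])
        for child in tree.get(n, []):              # union with dependents' spans
            span |= dsp[child]
        dsp[n] = frozenset(span)

    roots = set(tree) - {c for kids in tree.values() for c in kids}
    for r in roots:
        visit(r)
    return hsp, dsp

# Toy example: node 0 heads nodes 1 and 2.
hsp, dsp = annotate({0: [1, 2], 1: [], 2: []}, {0: {3}, 1: {0, 1}, 2: {2}})
print(hsp[0], dsp[0])   # frozenset({3}) frozenset({0, 1, 2, 3})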
GEM-SciDuet-train-45#paper-1065#slide-2
1065
A Dependency-to-String Model for Chinese-Japanese SMT System
This paper describes the Beijing Jiaotong University Chinese-Japanese machine translation system which participated in the 2st Workshop on Asian Translation (WAT2015). We exploit the syntactic and semantic knowledge encoded in dependency tree to build a dependency-to-string translation model for Chinese-Japanese statistical machine translation (SMT). Our system achieves a BLEU of 34.87 and a RIBES of 79.25 on the Chinese-Japanese translation task in the official evaluation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65 ], "paper_content_text": [ "Introduction Motivated by representing the grammatical function of the constituents of a sentence or phrase,dependency grammar holds both syntactic and semantic knowledge.How to building translation model by exploiting the syntactic and semantic knowledge encoded in dependency tree has been now one of the most popular research topics in the recent years. In dependency tree based models, researchers propose some tree decomposition methods or grammars to build translation model.", "These models can be classified into string-to-tree model, tree-to-tree model and tree-to-string model.", "Our system participated in WAT2015 (Nakazawa et al., 2015) adopts tree-to-string model.", "Particularly, we use the dependency-to-string translation method proposed by (Xie et al., 2011) in Chinese-Japanese translation task.", "This method proposes a novel tree decompose tion, which takes head-dependents relation (HDR) fragments as elementary structures of rule extraction.", "An HDR is a tree fragment composed of a head and all its dependents.", "In this method, the translation rules are expressed with the source side as generalized HDR fragments and the target sides as strings.", "The model takes substitution as the only operation and can specify reordering information directly into translation rules, thus requires no additional heuristics or reordering models as the previous works.", "And the model is more concise.", "Section 2 describes dependency-to-string translation model in detail.", "Section 3 reports on our experiment results on a Chinese-SMT system.", "Section 4 concludes this paper.", "Dependency-to-String Translation Model In this paper, we describes the translation model in four aspects, dependency-to-string grammar, translation rule acquisition, the model and the decoding.", "Dependency-to-String Grammar A dependency structure for a sentence is a directed acyclic graph with words as nodes and modification relations as edges, each edge directing from a head to a dependent.", "Figure 1 FIFA World Cup in South Africa successfully hold Here are some properties of a HDR fragment : 1) head determines the syntactic category of HDR, and can often replace HDR; 2) head determines the semantic category of HDR; dependent gives semantic specification.", "According to the above properties, we can represent the corresponding HDR fragment with head.", "The translation rules of dependency-to-string model can be classified into two categories: -HDR rules, which represent the source side as generalized HDR fragments and the target sides as strings and act as both translation rules and reordering rules.", "-H rules, which represent the source side as a word and the target side as words or strings and are used for translating words.", "Figure 1 shows examples of the two translation rules.", "(b), (c) and (d) are three examples of HDR rules, and (d) is an example of H rules.", "In the figure, the nodes modified by \"*\" are head of HDR fragment.", "By the way, the three HDR rules describes translation ways of the same sentence pattern (that is, constituted by \"noun phrase + preposition phrase + adverb + verb\" ) and different contexts.", "Thereinto, rule (b) appoints its context completely, rule (c) restrains its 
context partially and rule (d) has no restraint for its context.", "Rule Acquisition The rule acquisition of dependency-to-string model begins with a parallel corpus with word-aligned results, the source dependency structures and the target side sentence.", "We accomplish the rule automatic acquisition through the following three steps: 1) Tree annotation: annotate the necessary information on each node of depend ency trees for translation rule acquisition.", "2) Acceptable HDR fragments identification: identify HDR fragments from the annotated trees for HDR rules generation.", "3) HDR rules generation: generate a series of HDR rules according to the identified acceptable HDR fragments.", "The following describes each of these in detail.", "Tree Annotation and Acceptable HDR Fragments Identification 83 The tree annotation can be accomplished by a single postorder transversal of dependency tree T. For each node n of T, we annotated with head span hsp(n) and dependency span dsp(n) (Xie et al., 2011) .", "During the recursive walk, we calculate hsp(n) according to alignment relation for each node n accessed.", "The dsp(n) can be obtained according to hsp(n) and dependency span of all dependents of n. After tree annotation, we can identify HDR fragments for HDR rules generation, according to head span and dependency span of each node.", "HDR Rules and H Rules Generation According to the identified acceptable HDR fragments, a series of lexicalized and unlexicalized HDR rules will be generated.", "This paper will not describe in detail about it and you can refer to (Xie et al., 2011) .", "H rules acquisition can be implemented as a sub procedure of HDR rules acquisition.", "Specifically, in the recursive walk of dependency tree, a H rule is generated according to alignment information for each node accessed.", "Translation Model Given the dependency-to-string grammar, for a given source language dependency tree T, it may generate more than one derivations D that convert a source dependency tree T into a target string e, thus producing varieties of candidate translations.", "To compare the candidate translations, we adopt a general log-linear model (Och and Ney, 2002) to define D as: P(D) ∝ ∏ ϕi (D) λi (1) where ϕi (D) is feature function defined on derivation D and λi are the feature weights.", "Our paper used seven features as follows: 1) translation probabilities: P(t|s) and P(s|t); 2) lexical translation probabilities: Plex (t|s) and Plex (s|t); 3) rule penalty: exp(-1); 4) target word penalty: exp(|e|); 5) language model : Plm(e); Decoding Our decoder is based on bottom up chart parsing algorithm that convert the input dependency structure into a target string.", "It finds the best derivation among all possible derivations D. Given a source dependency structure T, the decoder traverses each internal node n of T in post-order.", "And we process it as follows.", "1) If n is a leaf node, it checks the rule set for matched translation rules H and uses the rules to generate candidate translation; 2) If n is a internal node, it enumerates all instances of the related sentence, clauses or phrases of the HDR fragment rooted at n, and checks the translation rule set for matched translation rules.", "If there is no matched rules, we construct a pseudo translation rule according to the word order of the HDR fragment in the source side; 3) Make use of Cube Pruning algorithm (Chiang, 2007; Huang and Chiang, 2007) to generate the candidate translation for the node n. 
To balance the decoder's performance and speed, we use four constraints as follows: 1) Beam-threshold: we get the score threshold from the best score in the current stack multiplied by a fixed ratio.", "The candidate translations with a score worse than the score threshold will be discarded; 2) beam-limit: the maximum number of candidate translations in the beam; 3) rule-threshold: we get the rule score threshold from the best score multiplied by a fixed ratio in the rule table queue.", "The rules with a score worse than rule score threshold will be dis-carded; 4) rule-limit: the maximum number of rules in the rule table queue.", "For our experiments, we set the beam-threshold = 10 -2 , beam-limit = 100, rule-threshold = 10 -2 and rule-limit = 100.", "Table 1 The comparison results of the two systems Then, we use dependency-to-string model described in Section 2 to build a Chinese-Japanese translation system.", "And use the BLUE score and RIBES score for evaluation.", "Experiments Data preparation Experiments and Evaluation Results The Chinese-Japanese translation system (Dep2str) consists of three modules: 1) Rule extraction module: extract rules using the Chinese dependency tree, the Japanese sentence and alignment information of the training corpus.", "2) Decoding module: decode the Chinese sentences for the n-best Japanese translations according to the model parameters that have been set.", "3) Training module: train the translation model using minimum error rate to get the best parameters on the development data.", "We then decode the test data using the system.", "Table 1 shows the number of the extracted translation rules and the translation performance on the test data.", "Furthermore, we implemented a MOSES PBSMT system (Koehn et al., 2002) as the baseline for a comparison.", "In our experiments the value of the distortion limit of the baseline system is the default.", "The number of translation rules and translation performance of the baseline system are also showed in the table.", "In terms of the number of translation rules, the number of the extracted translation rules in the baseline system is over 3 times more than that of dep2str system.", "We think that the lack of restrictions on syntactic structure resulted in this.", "In terms of translation performance, the BLEU score and RIBES score on the test data achieved by dep2str system are higher than the baseline system by 0.62 and 0.31 respectively.", "These evaluation results illustrate that the translation system based on the dependency-to-string model is effective on the Chinese-Japanese translation task.", "Conclusions This paper describes the Beijing Jiaotong University Chinese-Japanese machine translation system participated in WAT2015.", "The system employs a dependency-to-string model, which takes the HDR fragments as elementary structures for the rule extraction and directly specifies the ordering information in translation rules, making the decoding algorithm simplified.", "The experiment results on the ASPEC data showed that the BLEU score and the RIBES score are increased by 0.62 and 0.31 respectively, compared with the phrase-based system.", "At present, the accuracy of the Chinese dependency parsing is not very high, and our system's performance is affected by the accuracy.", "Meanwhile, we filtered out the sentences which could not be parsed by the dependency parser.", "This caused a decrease in the amount of training data by about 100 thousand sentence pairs.", "We think that the system's performance will be 
improved with Chinese dependency parsing with high accuracy." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2010", "2.2", "2.2.2", "2.3", "2.4", "3.2", "4" ], "paper_header_content": [ "Introduction", "Dependency-to-String Translation Model", "Dependency-to-String Grammar", "FIFA World Cup in South Africa successfully hold", "Rule Acquisition", "HDR Rules and H Rules Generation", "Translation Model", "Decoding", "Experiments and Evaluation Results", "Conclusions" ] }
GEM-SciDuet-train-45#paper-1065#slide-2
Decoding
Bottom up chart parsing Find the best derivation among all possible derivations Apply H rules when n is a leaf node Apply HDR rules when n is an internal node Generate the candidate translation for n by cube pruning
Bottom up chart parsing Find the best derivation among all possible derivations Apply H rules when n is a leaf node Apply HDR rules when n is an internal node Generate the candidate translation for n by cube pruning
[]
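The four decoding constraints described in the paper above (beam-threshold, beam-limit, rule-threshold, rule-limit) amount to relative-threshold pruning plus histogram pruning, applied once to the candidate beam and once to the rule table queue. A minimal sketch, assuming probability-like scores where larger is better (the "best score multiplied by a fixed ratio" reading of the thresholds is our interpretation):

def prune(items, threshold_ratio=1e-2, limit=100, score=lambda x: x[0]):
    """Keep at most `limit` items whose score is within `threshold_ratio`
    of the best score (relative beam threshold + histogram pruning).
    The decoder applies this twice: to the candidate-translation beam and
    to the rule-table queue, both with ratio 10^-2 and limit 100."""
    if not items:
        return []
    ranked = sorted(items, key=score, reverse=True)
    cutoff = score(ranked[0]) * threshold_ratio
    return [it for it in ranked if score(it) >= cutoff][:limit]

beam = [(0.9, "trans-a"), (0.4, "trans-b"), (0.0005, "trans-c")]
print(prune(beam))   # trans-c falls below 0.9 * 0.01 and is pruned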
GEM-SciDuet-train-45#paper-1065#slide-3
1065
A Dependency-to-String Model for Chinese-Japanese SMT System
This paper describes the Beijing Jiaotong University Chinese-Japanese machine translation system which participated in the 2st Workshop on Asian Translation (WAT2015). We exploit the syntactic and semantic knowledge encoded in dependency tree to build a dependency-to-string translation model for Chinese-Japanese statistical machine translation (SMT). Our system achieves a BLEU of 34.87 and a RIBES of 79.25 on the Chinese-Japanese translation task in the official evaluation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65 ], "paper_content_text": [ "Introduction Motivated by representing the grammatical function of the constituents of a sentence or phrase,dependency grammar holds both syntactic and semantic knowledge.How to building translation model by exploiting the syntactic and semantic knowledge encoded in dependency tree has been now one of the most popular research topics in the recent years. In dependency tree based models, researchers propose some tree decomposition methods or grammars to build translation model.", "These models can be classified into string-to-tree model, tree-to-tree model and tree-to-string model.", "Our system participated in WAT2015 (Nakazawa et al., 2015) adopts tree-to-string model.", "Particularly, we use the dependency-to-string translation method proposed by (Xie et al., 2011) in Chinese-Japanese translation task.", "This method proposes a novel tree decompose tion, which takes head-dependents relation (HDR) fragments as elementary structures of rule extraction.", "An HDR is a tree fragment composed of a head and all its dependents.", "In this method, the translation rules are expressed with the source side as generalized HDR fragments and the target sides as strings.", "The model takes substitution as the only operation and can specify reordering information directly into translation rules, thus requires no additional heuristics or reordering models as the previous works.", "And the model is more concise.", "Section 2 describes dependency-to-string translation model in detail.", "Section 3 reports on our experiment results on a Chinese-SMT system.", "Section 4 concludes this paper.", "Dependency-to-String Translation Model In this paper, we describes the translation model in four aspects, dependency-to-string grammar, translation rule acquisition, the model and the decoding.", "Dependency-to-String Grammar A dependency structure for a sentence is a directed acyclic graph with words as nodes and modification relations as edges, each edge directing from a head to a dependent.", "Figure 1 FIFA World Cup in South Africa successfully hold Here are some properties of a HDR fragment : 1) head determines the syntactic category of HDR, and can often replace HDR; 2) head determines the semantic category of HDR; dependent gives semantic specification.", "According to the above properties, we can represent the corresponding HDR fragment with head.", "The translation rules of dependency-to-string model can be classified into two categories: -HDR rules, which represent the source side as generalized HDR fragments and the target sides as strings and act as both translation rules and reordering rules.", "-H rules, which represent the source side as a word and the target side as words or strings and are used for translating words.", "Figure 1 shows examples of the two translation rules.", "(b), (c) and (d) are three examples of HDR rules, and (d) is an example of H rules.", "In the figure, the nodes modified by \"*\" are head of HDR fragment.", "By the way, the three HDR rules describes translation ways of the same sentence pattern (that is, constituted by \"noun phrase + preposition phrase + adverb + verb\" ) and different contexts.", "Thereinto, rule (b) appoints its context completely, rule (c) restrains its 
context partially and rule (d) has no restraint for its context.", "Rule Acquisition The rule acquisition of dependency-to-string model begins with a parallel corpus with word-aligned results, the source dependency structures and the target side sentence.", "We accomplish the rule automatic acquisition through the following three steps: 1) Tree annotation: annotate the necessary information on each node of depend ency trees for translation rule acquisition.", "2) Acceptable HDR fragments identification: identify HDR fragments from the annotated trees for HDR rules generation.", "3) HDR rules generation: generate a series of HDR rules according to the identified acceptable HDR fragments.", "The following describes each of these in detail.", "Tree Annotation and Acceptable HDR Fragments Identification 83 The tree annotation can be accomplished by a single postorder transversal of dependency tree T. For each node n of T, we annotated with head span hsp(n) and dependency span dsp(n) (Xie et al., 2011) .", "During the recursive walk, we calculate hsp(n) according to alignment relation for each node n accessed.", "The dsp(n) can be obtained according to hsp(n) and dependency span of all dependents of n. After tree annotation, we can identify HDR fragments for HDR rules generation, according to head span and dependency span of each node.", "HDR Rules and H Rules Generation According to the identified acceptable HDR fragments, a series of lexicalized and unlexicalized HDR rules will be generated.", "This paper will not describe in detail about it and you can refer to (Xie et al., 2011) .", "H rules acquisition can be implemented as a sub procedure of HDR rules acquisition.", "Specifically, in the recursive walk of dependency tree, a H rule is generated according to alignment information for each node accessed.", "Translation Model Given the dependency-to-string grammar, for a given source language dependency tree T, it may generate more than one derivations D that convert a source dependency tree T into a target string e, thus producing varieties of candidate translations.", "To compare the candidate translations, we adopt a general log-linear model (Och and Ney, 2002) to define D as: P(D) ∝ ∏ ϕi (D) λi (1) where ϕi (D) is feature function defined on derivation D and λi are the feature weights.", "Our paper used seven features as follows: 1) translation probabilities: P(t|s) and P(s|t); 2) lexical translation probabilities: Plex (t|s) and Plex (s|t); 3) rule penalty: exp(-1); 4) target word penalty: exp(|e|); 5) language model : Plm(e); Decoding Our decoder is based on bottom up chart parsing algorithm that convert the input dependency structure into a target string.", "It finds the best derivation among all possible derivations D. Given a source dependency structure T, the decoder traverses each internal node n of T in post-order.", "And we process it as follows.", "1) If n is a leaf node, it checks the rule set for matched translation rules H and uses the rules to generate candidate translation; 2) If n is a internal node, it enumerates all instances of the related sentence, clauses or phrases of the HDR fragment rooted at n, and checks the translation rule set for matched translation rules.", "If there is no matched rules, we construct a pseudo translation rule according to the word order of the HDR fragment in the source side; 3) Make use of Cube Pruning algorithm (Chiang, 2007; Huang and Chiang, 2007) to generate the candidate translation for the node n. 
To balance the decoder's performance and speed, we use four constraints as follows: 1) Beam-threshold: we get the score threshold from the best score in the current stack multiplied by a fixed ratio.", "The candidate translations with a score worse than the score threshold will be discarded; 2) beam-limit: the maximum number of candidate translations in the beam; 3) rule-threshold: we get the rule score threshold from the best score multiplied by a fixed ratio in the rule table queue.", "The rules with a score worse than rule score threshold will be dis-carded; 4) rule-limit: the maximum number of rules in the rule table queue.", "For our experiments, we set the beam-threshold = 10 -2 , beam-limit = 100, rule-threshold = 10 -2 and rule-limit = 100.", "Table 1 The comparison results of the two systems Then, we use dependency-to-string model described in Section 2 to build a Chinese-Japanese translation system.", "And use the BLUE score and RIBES score for evaluation.", "Experiments Data preparation Experiments and Evaluation Results The Chinese-Japanese translation system (Dep2str) consists of three modules: 1) Rule extraction module: extract rules using the Chinese dependency tree, the Japanese sentence and alignment information of the training corpus.", "2) Decoding module: decode the Chinese sentences for the n-best Japanese translations according to the model parameters that have been set.", "3) Training module: train the translation model using minimum error rate to get the best parameters on the development data.", "We then decode the test data using the system.", "Table 1 shows the number of the extracted translation rules and the translation performance on the test data.", "Furthermore, we implemented a MOSES PBSMT system (Koehn et al., 2002) as the baseline for a comparison.", "In our experiments the value of the distortion limit of the baseline system is the default.", "The number of translation rules and translation performance of the baseline system are also showed in the table.", "In terms of the number of translation rules, the number of the extracted translation rules in the baseline system is over 3 times more than that of dep2str system.", "We think that the lack of restrictions on syntactic structure resulted in this.", "In terms of translation performance, the BLEU score and RIBES score on the test data achieved by dep2str system are higher than the baseline system by 0.62 and 0.31 respectively.", "These evaluation results illustrate that the translation system based on the dependency-to-string model is effective on the Chinese-Japanese translation task.", "Conclusions This paper describes the Beijing Jiaotong University Chinese-Japanese machine translation system participated in WAT2015.", "The system employs a dependency-to-string model, which takes the HDR fragments as elementary structures for the rule extraction and directly specifies the ordering information in translation rules, making the decoding algorithm simplified.", "The experiment results on the ASPEC data showed that the BLEU score and the RIBES score are increased by 0.62 and 0.31 respectively, compared with the phrase-based system.", "At present, the accuracy of the Chinese dependency parsing is not very high, and our system's performance is affected by the accuracy.", "Meanwhile, we filtered out the sentences which could not be parsed by the dependency parser.", "This caused a decrease in the amount of training data by about 100 thousand sentence pairs.", "We think that the system's performance will be 
improved with Chinese dependency parsing with high accuracy." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2010", "2.2", "2.2.2", "2.3", "2.4", "3.2", "4" ], "paper_header_content": [ "Introduction", "Dependency-to-String Translation Model", "Dependency-to-String Grammar", "FIFA World Cup in South Africa successfully hold", "Rule Acquisition", "HDR Rules and H Rules Generation", "Translation Model", "Decoding", "Experiments and Evaluation Results", "Conclusions" ] }
GEM-SciDuet-train-45#paper-1065#slide-3
Experiment and Evaluation
SRI Language Modeling Toolkit System Rule # BLEU RIBES Baseline: MOSES PBSMT system Ours performed better although using only a small number of translation rules
SRI Language Modeling Toolkit System Rule # BLEU RIBES Baseline: MOSES PBSMT system Ours performed better although using only a small number of translation rules
[]
GEM-SciDuet-train-46#paper-1069#slide-0
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
We extend the arc-hybrid transition system for dependency parsing with a SWAP transition that enables reordering of the words and construction of non-projective trees. Although this extension potentially breaks the arc-decomposability of the transition system, we show that the existing dynamic oracle can be modified and combined with a static oracle for the SWAP transition. Experiments on five languages with different degrees of non-projectivity show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Non-projective sentences are a notorious problem in dependency parsing.", "Traditional algorithms like those developed by Nivre (2003 Nivre ( , 2004 for transition-based parsing only allow the construction of projective trees.", "These algorithms make use of a stack, a buffer and a set of arcs, and parsing consists of performing a sequence of transitions on these structures.", "Traditional algorithms have been extended in different ways to allow the construction of non-projective trees (Nivre and Nilsson, 2005; Attardi, 2006; Nivre, 2007; Gómez-Rodríguez and Nivre, 2010) .", "One method proposed by Nivre (2009) is based on the idea of word reordering.", "This is achieved by adding a transition that swaps two items in the data structures used, enabling the construction of arbitrary non-projective trees while still only adding arcs between adjacent words (after possible reordering).", "This technique was previously used in the arc-standard transition system (Nivre, 2004) .", "The first contribution of this paper is to show that it can also be combined with the arc-hybrid system proposed by Kuhlmann et al.", "(2011) .", "Recent advances in dependency parsing have demonstrated the benefit of using dynamic oracles for training dependency parsers (Goldberg and Nivre, 2013) .", "Traditionally, parsers were trained in a static way and were only exposed to configurations resulting from optimal transitions during training.", "Dynamic oracles define optimal transition sequences for any configuration in which the parser may be.", "The use of dynamic oracles enables training with exploration of errors, which mitigates the problem of error propagation at prediction time.", "In order to define a dynamic oracle, we need to be able to compute the cost of any transition in any configuration, where cost is usually defined as minimum Hamming loss with respect to the best tree reachable from that configuration.", "Goldberg and Nivre (2013) showed that this computation is straightforward for transition systems that satisfy the property of arc-decomposability, meaning that a tree is reachable from a configuration if and only if every arc in the tree is reachable in itself.", "Based on this result, they defined dynamic oracles for the arc-eager (Nivre, 2003) , arc-hybrid (Kuhlmann et al., 2011) and easy-first (Goldberg and Elhadad, 2010) systems.", "Transition systems that allow non-projective trees are in general not arc-decomposable and therefore require different methods for constructing dynamic oracles (Gómez-Rodríguez and Fernández-González, 2015) .", "The online reordering system of Nivre (2009) is furthermore based on the arc-standard system, which is not even arc-decomposable in itself (Goldberg and Nivre, 2013) .", "The second contribution of this paper is to show that we can take advantage of the arcdecomposability of the arc-hybrid transition system and extend the existing dynamic oracle to deal with the added swap transition.", "The resulting or-acle is static with respect to the new 
transition but remains dynamic for all other transitions.", "We show experimentally that this static-dynamic oracle gives a significant advantage over the alternative static oracle and results in competitive results for non-projective parsing.", "An Extended Transition System The arc-hybrid system has configurations of the form c = (Σ, B, A), where • Σ is a stack (represented as a list with the head to the right), • B is a buffer (represented as a list with the head to the left), • A is a set of dependency arcs (represented as (h, d) pairs).", "1 Given a sentence W = w 1 , .", ".", ".", ", w n , the system is initialized to: c 0 = ([ ], [1, .", ".", ".", ", n, n+1], { }) where n+1 is a special root node, denoted r from now on.", "Terminal configurations have the form: c = ([ ], [r], A) and the parse tree is given by the arc set A.", "There are preconditions such that SHIFT is legal only if b = r, RIGHT only if |Σ| > 1 and LEFT only if |Σ| > 0.", "In order to enforce that r has exactly one dependent, as required by some dependency grammar frameworks, we add a precondition such that LEFT is legal only if |Σ| = 1 or b = r. In the extended system, we add a SWAP transition to be able to construct non-projective trees using online reordering: • SWAP[(σ|s 0 , b|β, A)] = (σ, b|s 0 |β, A) There is a precondition making SWAP legal only if |Σ| > 0, |B| > 1 and s 0 < b.", "3 The SWAP transition reorders nodes by moving the item on top of the stack (s 0 ) to the second position in the buffer, thus inverting the order of s 0 and b.", "The SHIFT and SWAP transitions together implement a simple sorting algorithm, which allows us to permute the order of nodes arbitrarily.", "As shown by (Nivre, 2009) , this implies that we can construct any non-projective tree by reordering and adding arcs between adjacent nodes using LEFT and RIGHT.", "As already observed, the main advantage of the arc-hybrid system over the arc-standard system is that it is arc-decomposable, which allows us to construct a simple and efficient dynamic oracle.", "The arc-eager system (Nivre, 2003) is also arcdecomposable but cannot be combined with SWAP because items on the stack in that system do not necessarily represent disjoint subtrees.", "A Static-Dynamic Oracle The dynamic oracle for arc-hybrid parsing defined by Goldberg and Nivre (2013) computes the cost of a transition by counting the number of gold arcs that are made unreachable by applying that transition.", "This presupposes that the system is arcdecomposable, a result that is proven in the same paper.", "Our construction of an oracle for arc-hybrid parsing with online ordering is based on the conjecture that we can retain arc-decomposition by only making SWAP transitions that are necessary to make non-projective arcs reachable and by enforcing all such transitions.", "Proving this conjecture is, however, outside the scope of this paper.", "Auxiliary Functions and Notation We assume that gold trees are preprocessed at training time to compute the following information for each input node i: • PROJ(i) = the position of node i in the projective order.", "4 • RDEPS(i) = the set of reachable dependents of i, initially all dependents of i.", "• LEFT: C(LEFT) = |RDEPS(s 0 )| + [[h(s 0 ) = b and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• RIGHT: C(RIGHT) = |RDEPS(s 0 )| + [[h(s 0 ) = s 1 and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• SHIFT: We use h(i) to denote the head of a node i 
in the gold tree.", "1.", "If there exists a node i ∈ B −b such that b < i and PROJ(b) > PROJ(i): C(SHIFT) = 0 2.", "Else: C(SHIFT) = |{d ∈ RDEPS(b) | d ∈ Σ}| + [[h(b) ∈ Σ −s 0 and b ∈ RDEPS(h(b))]] Updates: Remove b from RDEPS(h(b)) if h(b) ∈ Σ −s 0 and remove d ∈ Σ from RDEPS(b).", "Static Oracle for SWAP Our oracle is fully dynamic with respect to SHIFT, LEFT and RIGHT but basically static with respect to SWAP.", "This means that only optimal (zero cost) SWAP transitions are allowed during training and that we force the parser to apply the SWAP transition when needed.", "Optimal: To prevent non-optimal SWAP transitions, we add a precondition so that SWAP is legal only if PROJ(s 0 ) > PROJ(b).", "Forced: To force necessary SWAP transitions, we bypass the oracle whenever PROJ(s 0 ) > PROJ(b).", "5 Dynamic Oracle Since we use a static oracle for SWAP transitions, these will always have zero cost.", "The dynamic oracle thus only needs to define costs for the remaining three transitions.", "To construct the oracle, we start from the old dynamic oracle for the projective system and extend it to account for the added flexibility introduced by SWAP.", "Figure 1 defines the transition costs as well as the necessary updates to RDEPS after applying a transition.", "• LEFT: Adding the arc (b, s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from b nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head in s 0 |β and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we instead define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "• RIGHT: Adding the arc (s 1 , s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from s 1 nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we again define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "1 2 3 4 s 1 s 0 b [ 1 2 ] Σ [ 3 4 ] B RIGHT ⇒ 1 2 3 4 [ 1 ] Σ [ 3 4 ] B SHIFT ⇓ 1 2 3 4 [ 1 2 3 ] Σ [ 4 ] B 1 2 4 3 s 1 s 0 b [ 1 2 ] Σ [ 4 3 ] B • SHIFT: In the projective case, shifting b onto the stack means that b will not be able to acquire a head in Σ other than the top item s 0 nor any dependents in Σ.", "With the SWAP transition and a static oracle, we also have to consider the case where b can later be swapped back to the buffer, in which case SHIFT has zero cost.", "We therefore have two cases in Figure 1 .", "In the first case, no updates are needed.", "In the second case, the updates are analogous to the old projective case.", "To illustrate how the oracle works, let us look at some hypothetical configurations.", "First, we can have a situation as in the top left corner of Figure 2 , where all nodes are in projective order given the gold tree displayed above the nodes.", "For simplicity, the nodes are named after their projective order.", "Applying a RIGHT transition in this configuration makes it impossible for s 0 (2) to be attached to its head (3) and therefore makes us lose the arc 3 → 2, as shown in the top right corner.", "If we instead apply a SHIFT transition, we lose the arc between b (3) and its head (1) as well as the arc 3 → 2, as 
shown in the bottom left corner.", "By contrast, a LEFT transition has zero cost, because no arcs are lost so the best tree reachable in the orig-inal configuration is still reachable after applying the LEFT transition.", "Consider now a configuration, like the one in the bottom right corner of Figure 2 , where the nodes are not in projective order.", "Here we can shift b (4) onto the stack without cost, because it will later be swapped back to the buffer to restore the projective order between 4 and 3.", "A RIGHT transition makes us lose the head and dependent of s 0 (4 → 2 and 2 → 3).", "A LEFT transition makes us lose the dependent of s 0 (2 → 3) .", "The combination of a dynamic oracle for LEFT, RIGHT and SHIFT with a static oracle for SWAP allows us to benefit from training with exploration in most situations and at the same time capture nonprojective dependencies.", "Experiments We extend the parser we used in de Lhoneux et al.", "(2017), a greedy transition-based parser that predicts the dependency tree given the raw words of a sentence.", "That parser is itself an extension of the parser developed by Kiperwasser and Goldberg (2016) .", "It relies on a BiLSTM to learn informative features of words in context and a feed-forward network for predicting the next parsing transition.", "It learns vector representations of the words as well as characters.", "Contrary to parsing tradition, it makes no use of part-of-speech tags.", "We released our system as UUparser 2.0, available at https: //github.com/UppsalaNLP/uuparser.", "We first compare our system, which uses our static-dynamic oracle, with the same system using a static oracle.", "This is to find out if we can benefit from error exploration using our partially dynamic oracle.", "We use the same set of hyperparameters as in that paper in all our experiments.", "We additionally compare our method to a different approach to handling non-projectivity, pseudo-projective parsing, as performed in de Lhoneux et al.", "(2017) .", "Pseudo-projective parsing was developed by Nivre and Nilsson (2005) .", "In a pre-processing step, the training data is projectivised: some nodes get reattached to a close parent.", "Parsed data are then 'deprojectivised' in a post-processing step.", "In order for information about non-projectivity to be recoverable after parsing, when projectivising, arcs are renamed to encode information about the original parent of dependents which get re-attached.", "Note that hyperparameters were tweaked for the pseudo-projective system, possibly giving an unfair advantage.", "Lastly, we compare to a projective baseline, using a dynamic oracle but no SWAP transition.", "6 This is to find out the extent to which dealing with non-projectivity is important.", "We selected a sample of 5 treebanks from the Universal Dependencies CoNLL 2017 shared task data .", "We selected languages to have different frequencies of non-projectivity, both at the sentence level and at the level of individual arcs, ranging from a very high frequency (Ancient-Greek) to a low frequency (English), as well as some typological variety.", "Non-projective frequencies were taken from Straka et al.", "(2015) and are included in Table 1 , which report our results on the development sets (best out of 20 epochs).", "Comparing to the static baseline, we can verify that our static-dynamic oracle really preserves the benefits of training with error exploration, with improvements ranging from 0.5 to 3.5 points.", "(All differences here are statistically significant 
with p<0.01, except for Portuguese, where the difference is statistically significant with p<0.05 according to the McNemar test).", "The new system achieves results largely on par with the pseudo-projective parser.", "Our method is better by a small margin for 3 out of 5 languages Table 1 : LAS on dev sets with gold tokenization for our static-dynamic system (S-Dy), the static and projective baselines (Static, Proj) and the pseudo-projective system of de Lhoneux et al.", "(2017) (PProj).", "%NP = percentage of nonprojective arcs/sentences.", "and worse by a large margin for 1.", "Overall, these results are encouraging given that our method is simpler and more efficient to train, with no need for pre-or post-processing and no extension of the dependency label set.", "7 Comparing to the projective baseline, we see that strictly projective parsing can be slightly better than both online reordering and pseudoprojective parsing for a language with few non-projective arcs/sentences like English.", "For all other languages, we see small (Arabic) to big (Ancient Greek) improvements from dealing with non-projectivity in some way.", "Conclusion We have shown how the SWAP transition for online reordering can be integrated into the archybrid transition system for dependency parsing in such a way that we still benefit from training with exploration using a static-dynamic oracle.", "In the future, we intend to test this further by evaluating our model on more languages with proper tuning of hyperparameters.", "We are also interested in the question of whether it is possible to define a fully dynamic oracle for our system and allow exploration for the SWAP transition too." ] }
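The extended transition system just described maps almost line-for-line into code. Below is an illustrative Python sketch of the configurations, legality preconditions and the four transitions; it follows the definitions above but is our own reconstruction, not the released UUparser code:

def initial_config(n):
    # c0 = ([], [1, ..., n, n+1], {}); node n+1 is the artificial root r
    return [], list(range(1, n + 2)), set()

def legal(t, stack, buf, root):
    b = buf[0] if buf else None
    if t == "SHIFT":
        return bool(buf) and b != root
    if t == "RIGHT":
        return len(stack) > 1
    if t == "LEFT":   # the root r may take exactly one dependent
        return bool(stack) and bool(buf) and (len(stack) == 1 or b != root)
    if t == "SWAP":
        return bool(stack) and len(buf) > 1 and stack[-1] < b
    raise ValueError(t)

def step(t, stack, buf, arcs):
    if t == "SHIFT":          # (sigma, b|beta, A) -> (sigma|b, beta, A)
        stack.append(buf.pop(0))
    elif t == "LEFT":         # add arc (b, s0) and pop s0
        arcs.add((buf[0], stack.pop()))
    elif t == "RIGHT":        # add arc (s1, s0) and pop s0
        s0 = stack.pop()
        arcs.add((stack[-1], s0))
    elif t == "SWAP":         # (sigma|s0, b|beta, A) -> (sigma, b|s0|beta, A)
        buf.insert(1, stack.pop())
    return stack, buf, arcs

stack, buf, arcs = initial_config(3)   # sentence of 3 words, root r = 4
assert legal("SHIFT", stack, buf, root=4)
step("SHIFT", stack, buf, arcs)        # stack = [1], buf = [2, 3, 4]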
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "An Extended Transition System", "A Static-Dynamic Oracle", "Auxiliary Functions and Notation", "Static Oracle for SWAP", "Dynamic Oracle", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-46#paper-1069#slide-0
Transition Based Parsing with Arc Hybrid
Drive your friend home root
Drive your friend home root
[]
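Since this slide's sentence has a projective gold tree, the three basic arc-hybrid transitions suffice to parse it. The sketch below uses the standard arc-hybrid definitions of SHIFT, LEFT and RIGHT (the paper text only spells out SWAP), and the gold analysis assumed here (your → friend, friend → Drive, home → Drive, Drive → root) is an illustrative guess, not taken from the slides themselves.

```python
# Configurations are (stack, buffer, arcs); arcs are (head, dependent) pairs.
def shift(c):
    stack, buffer, arcs = c
    return (stack + [buffer[0]], buffer[1:], arcs)

def left(c):  # attach s0 to b: adds arc (b, s0), pops s0
    stack, buffer, arcs = c
    return (stack[:-1], buffer, arcs | {(buffer[0], stack[-1])})

def right(c):  # attach s0 to s1: adds arc (s1, s0), pops s0
    stack, buffer, arcs = c
    return (stack[:-1], buffer, arcs | {(stack[-2], stack[-1])})

# Drive=1, your=2, friend=3, home=4, root=5 (the artificial root node).
c = ([], [1, 2, 3, 4, 5], set())
for t in (shift, shift, left, shift, right, shift, right, left):
    c = t(c)
print(c)  # terminal: stack empty, buffer [5], arcs {(3, 2), (1, 3), (1, 4), (5, 1)}
```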
GEM-SciDuet-train-46#paper-1069#slide-1
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
GEM-SciDuet-train-46#paper-1069#slide-1
Static Oracle for Arc Hybrid
[Drive your friend home **root**] Drive [friend home **root**]
[Drive your friend home **root**] Drive [friend home **root**]
[]
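The bracketed stack/buffer snapshots on this slide come from following a static oracle. A hypothetical sketch of such an oracle for the extended system, assuming gold heads `head`, remaining-dependent counts `n_deps_left` and the paper's projective order `proj` are precomputed (the function and argument names are illustrative, not UUparser's):

```python
def static_oracle(stack, buffer, head, n_deps_left, proj):
    # Deterministic canonical transition for a gold-annotated sentence.
    b = buffer[0]
    if stack:
        s0 = stack[-1]
        # SWAP is forced whenever s0 follows b in the projective order,
        # mirroring the paper's static treatment of SWAP.
        if proj[s0] > proj[b]:
            return "SWAP"
        # Attach s0 only once all of its own dependents are collected.
        if head[s0] == b and n_deps_left[s0] == 0:
            return "LEFT"
        if len(stack) > 1 and head[s0] == stack[-2] and n_deps_left[s0] == 0:
            return "RIGHT"
    return "SHIFT"
```

After a LEFT or RIGHT, the caller is expected to decrement `n_deps_left` for the new head; the SWAP branch encodes the paper's rule that SWAP is required exactly when PROJ(s 0) > PROJ(b).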
GEM-SciDuet-train-46#paper-1069#slide-2
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
GEM-SciDuet-train-46#paper-1069#slide-2
Dynamic Oracle for Arc Hybrid
[Drive your friend home **root**] Drive [friend home **root**]
[Drive your friend home **root**] Drive [friend home **root**]
[]
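What the dynamic oracle buys, as this slide illustrates, is training with error exploration: the parser may follow its own, possibly costly, prediction and still receive a correct supervision signal in the resulting configuration. A schematic training loop, with a hypothetical `model` interface and `config` objects standing in for whatever the real trainer uses:

```python
import random

def train_on_sentence(model, config, transition_costs, explore_p=0.1):
    while not config.is_terminal():
        costs = transition_costs(config)           # e.g. {"SHIFT": 1, "LEFT": 0, ...}
        zero_cost = [t for t, c in costs.items() if c == 0]
        scores = model.score_transitions(config)   # dict: transition -> score
        model.update(scores, zero_cost)            # learn towards the zero-cost set
        predicted = max(scores, key=scores.get)
        # Error exploration: sometimes follow a costly prediction so the
        # model sees, and learns to recover from, its own mistakes.
        if predicted in zero_cost or random.random() < explore_p:
            config = config.apply(predicted)
        else:
            config = config.apply(random.choice(zero_cost))
```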
GEM-SciDuet-train-46#paper-1069#slide-3
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
GEM-SciDuet-train-46#paper-1069#slide-3
Arc Hybrid Parsing with Reordering
Drive your friend home root
Drive your friend home root
[]
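The reordering shown on this slide is exactly what the paper's SWAP transition provides. Continuing the small (stack, buffer, arcs) sketch from earlier, SWAP and its legality check follow directly from the definition SWAP[(σ|s 0, b|β, A)] = (σ, b|s 0 |β, A):

```python
def swap(c):
    # Move s0 to the second buffer position, inverting the order of s0 and b.
    stack, buffer, arcs = c
    return (stack[:-1], [buffer[0], stack[-1]] + buffer[1:], arcs)

def swap_legal(c):
    stack, buffer, _ = c
    # Legal only if the stack is non-empty, at least two buffer items
    # remain, and s0 still precedes b in the original word order.
    return len(stack) > 0 and len(buffer) > 1 and stack[-1] < buffer[0]
```

Repeated SHIFT/SWAP steps let the parser permute the input arbitrarily, after which LEFT and RIGHT can add the now-adjacent non-projective arcs.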
GEM-SciDuet-train-46#paper-1069#slide-4
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
We extend the arc-hybrid transition system for dependency parsing with a SWAP transition that enables reordering of the words and construction of non-projective trees. Although this extension potentially breaks the arc-decomposability of the transition system, we show that the existing dynamic oracle can be modified and combined with a static oracle for the SWAP transition. Experiments on five languages with different degrees of non-projectivity show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.
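The static-dynamic oracle described in this abstract leans on the projective order PROJ defined later in the paper. A hypothetical helper for computing it, under the standard assumption (Nivre, 2009) that the projective order is an inorder traversal of the gold tree:

```python
from collections import defaultdict

def projective_order(head, root):
    # head: dict mapping each word to its gold head; the single top word
    # is headed by the artificial root node `root` (r = n+1 in the paper).
    deps = defaultdict(list)
    for d, h in head.items():
        deps[h].append(d)
    order, counter = {}, [1]

    def visit(node):
        # Visit left dependents, then the node itself, then right dependents.
        for d in sorted(x for x in deps[node] if x < node):
            visit(d)
        order[node] = counter[0]
        counter[0] += 1
        for d in sorted(x for x in deps[node] if x > node):
            visit(d)

    for top in sorted(deps[root]):
        visit(top)
    return order
```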
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Non-projective sentences are a notorious problem in dependency parsing.", "Traditional algorithms like those developed by Nivre (2003 Nivre ( , 2004 for transition-based parsing only allow the construction of projective trees.", "These algorithms make use of a stack, a buffer and a set of arcs, and parsing consists of performing a sequence of transitions on these structures.", "Traditional algorithms have been extended in different ways to allow the construction of non-projective trees (Nivre and Nilsson, 2005; Attardi, 2006; Nivre, 2007; Gómez-Rodríguez and Nivre, 2010) .", "One method proposed by Nivre (2009) is based on the idea of word reordering.", "This is achieved by adding a transition that swaps two items in the data structures used, enabling the construction of arbitrary non-projective trees while still only adding arcs between adjacent words (after possible reordering).", "This technique was previously used in the arc-standard transition system (Nivre, 2004) .", "The first contribution of this paper is to show that it can also be combined with the arc-hybrid system proposed by Kuhlmann et al.", "(2011) .", "Recent advances in dependency parsing have demonstrated the benefit of using dynamic oracles for training dependency parsers (Goldberg and Nivre, 2013) .", "Traditionally, parsers were trained in a static way and were only exposed to configurations resulting from optimal transitions during training.", "Dynamic oracles define optimal transition sequences for any configuration in which the parser may be.", "The use of dynamic oracles enables training with exploration of errors, which mitigates the problem of error propagation at prediction time.", "In order to define a dynamic oracle, we need to be able to compute the cost of any transition in any configuration, where cost is usually defined as minimum Hamming loss with respect to the best tree reachable from that configuration.", "Goldberg and Nivre (2013) showed that this computation is straightforward for transition systems that satisfy the property of arc-decomposability, meaning that a tree is reachable from a configuration if and only if every arc in the tree is reachable in itself.", "Based on this result, they defined dynamic oracles for the arc-eager (Nivre, 2003) , arc-hybrid (Kuhlmann et al., 2011) and easy-first (Goldberg and Elhadad, 2010) systems.", "Transition systems that allow non-projective trees are in general not arc-decomposable and therefore require different methods for constructing dynamic oracles (Gómez-Rodríguez and Fernández-González, 2015) .", "The online reordering system of Nivre (2009) is furthermore based on the arc-standard system, which is not even arc-decomposable in itself (Goldberg and Nivre, 2013) .", "The second contribution of this paper is to show that we can take advantage of the arcdecomposability of the arc-hybrid transition system and extend the existing dynamic oracle to deal with the added swap transition.", "The resulting or-acle is static with respect to the new 
transition but remains dynamic for all other transitions.", "We show experimentally that this static-dynamic oracle gives a significant advantage over the alternative static oracle and results in competitive results for non-projective parsing.", "An Extended Transition System The arc-hybrid system has configurations of the form c = (Σ, B, A), where • Σ is a stack (represented as a list with the head to the right), • B is a buffer (represented as a list with the head to the left), • A is a set of dependency arcs (represented as (h, d) pairs).", "1 Given a sentence W = w 1 , .", ".", ".", ", w n , the system is initialized to: c 0 = ([ ], [1, .", ".", ".", ", n, n+1], { }) where n+1 is a special root node, denoted r from now on.", "Terminal configurations have the form: c = ([ ], [r], A) and the parse tree is given by the arc set A.", "There are preconditions such that SHIFT is legal only if b = r, RIGHT only if |Σ| > 1 and LEFT only if |Σ| > 0.", "In order to enforce that r has exactly one dependent, as required by some dependency grammar frameworks, we add a precondition such that LEFT is legal only if |Σ| = 1 or b = r. In the extended system, we add a SWAP transition to be able to construct non-projective trees using online reordering: • SWAP[(σ|s 0 , b|β, A)] = (σ, b|s 0 |β, A) There is a precondition making SWAP legal only if |Σ| > 0, |B| > 1 and s 0 < b.", "3 The SWAP transition reorders nodes by moving the item on top of the stack (s 0 ) to the second position in the buffer, thus inverting the order of s 0 and b.", "The SHIFT and SWAP transitions together implement a simple sorting algorithm, which allows us to permute the order of nodes arbitrarily.", "As shown by (Nivre, 2009) , this implies that we can construct any non-projective tree by reordering and adding arcs between adjacent nodes using LEFT and RIGHT.", "As already observed, the main advantage of the arc-hybrid system over the arc-standard system is that it is arc-decomposable, which allows us to construct a simple and efficient dynamic oracle.", "The arc-eager system (Nivre, 2003) is also arcdecomposable but cannot be combined with SWAP because items on the stack in that system do not necessarily represent disjoint subtrees.", "A Static-Dynamic Oracle The dynamic oracle for arc-hybrid parsing defined by Goldberg and Nivre (2013) computes the cost of a transition by counting the number of gold arcs that are made unreachable by applying that transition.", "This presupposes that the system is arcdecomposable, a result that is proven in the same paper.", "Our construction of an oracle for arc-hybrid parsing with online ordering is based on the conjecture that we can retain arc-decomposition by only making SWAP transitions that are necessary to make non-projective arcs reachable and by enforcing all such transitions.", "Proving this conjecture is, however, outside the scope of this paper.", "Auxiliary Functions and Notation We assume that gold trees are preprocessed at training time to compute the following information for each input node i: • PROJ(i) = the position of node i in the projective order.", "4 • RDEPS(i) = the set of reachable dependents of i, initially all dependents of i.", "• LEFT: C(LEFT) = |RDEPS(s 0 )| + [[h(s 0 ) = b and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• RIGHT: C(RIGHT) = |RDEPS(s 0 )| + [[h(s 0 ) = s 1 and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• SHIFT: We use h(i) to denote the head of a node i 
in the gold tree.", "1. If there exists a node i ∈ B − b such that b < i and PROJ(b) > PROJ(i): C(SHIFT) = 0.", "2. Else: C(SHIFT) = |{d ∈ RDEPS(b) | d ∈ Σ}| + [[h(b) ∈ Σ − s_0 and b ∈ RDEPS(h(b))]] Updates: Remove b from RDEPS(h(b)) if h(b) ∈ Σ − s_0 and remove d ∈ Σ from RDEPS(b).", "Static Oracle for SWAP Our oracle is fully dynamic with respect to SHIFT, LEFT and RIGHT but basically static with respect to SWAP.", "This means that only optimal (zero cost) SWAP transitions are allowed during training and that we force the parser to apply the SWAP transition when needed.", "Optimal: To prevent non-optimal SWAP transitions, we add a precondition so that SWAP is legal only if PROJ(s_0) > PROJ(b).", "Forced: To force necessary SWAP transitions, we bypass the oracle whenever PROJ(s_0) > PROJ(b).", "Dynamic Oracle Since we use a static oracle for SWAP transitions, these will always have zero cost.", "The dynamic oracle thus only needs to define costs for the remaining three transitions.", "To construct the oracle, we start from the old dynamic oracle for the projective system and extend it to account for the added flexibility introduced by SWAP.", "Figure 1 defines the transition costs as well as the necessary updates to RDEPS after applying a transition.", "• LEFT: Adding the arc (b, s_0) and popping s_0 from the stack means that s_0 will not be able to acquire a head different from b nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head in s_0|β and dependents in b|β, but because s_0 can potentially be swapped back to the buffer, we instead define reachability explicitly through RDEPS(s_0) (for dependents) and RDEPS(h(s_0)) (for the head) and update these accordingly after applying the transition.", "• RIGHT: Adding the arc (s_1, s_0) and popping s_0 from the stack means that s_0 will not be able to acquire a head different from s_1 nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head and dependents in b|β, but because s_0 can potentially be swapped back to the buffer, we again define reachability explicitly through RDEPS(s_0) (for dependents) and RDEPS(h(s_0)) (for the head) and update these accordingly after applying the transition.", "[Figure 2: example configurations with stack Σ = [1 2] (s_1 = 1, s_0 = 2), showing the configurations resulting from RIGHT and SHIFT when the buffer is in projective order (B = [3 4]) and when it is not (B = [4 3]).]", "• SHIFT: In the projective case, shifting b onto the stack means that b will not be able to acquire a head in Σ other than the top item s_0 nor any dependents in Σ.", "With the SWAP transition and a static oracle, we also have to consider the case where b can later be swapped back to the buffer, in which case SHIFT has zero cost.", "We therefore have two cases in Figure 1.", "In the first case, no updates are needed.", "In the second case, the updates are analogous to the old projective case.", "To illustrate how the oracle works, let us look at some hypothetical configurations.", "First, we can have a situation as in the top left corner of Figure 2, where all nodes are in projective order given the gold tree displayed above the nodes.", "For simplicity, the nodes are named after their projective order.", "Applying a RIGHT transition in this configuration makes it impossible for s_0 (2) to be attached to its head (3) and therefore makes us lose the arc 3 → 2, as shown in the top right corner.", "If we instead apply a SHIFT transition, we lose the arc between b (3) and its head (1) as well as the arc 3 → 2, as
shown in the bottom left corner.", "By contrast, a LEFT transition has zero cost, because no arcs are lost, so the best tree reachable in the original configuration is still reachable after applying the LEFT transition.", "Consider now a configuration, like the one in the bottom right corner of Figure 2, where the nodes are not in projective order.", "Here we can shift b (4) onto the stack without cost, because it will later be swapped back to the buffer to restore the projective order between 4 and 3.", "A RIGHT transition makes us lose the head and dependent of s_0 (4 → 2 and 2 → 3).", "A LEFT transition makes us lose the dependent of s_0 (2 → 3).", "The combination of a dynamic oracle for LEFT, RIGHT and SHIFT with a static oracle for SWAP allows us to benefit from training with exploration in most situations and at the same time capture non-projective dependencies.", "Experiments We extend the parser we used in de Lhoneux et al. (2017), a greedy transition-based parser that predicts the dependency tree given the raw words of a sentence.", "That parser is itself an extension of the parser developed by Kiperwasser and Goldberg (2016).", "It relies on a BiLSTM to learn informative features of words in context and a feed-forward network for predicting the next parsing transition.", "It learns vector representations of the words as well as characters.", "Contrary to parsing tradition, it makes no use of part-of-speech tags.", "We released our system as UUparser 2.0, available at https://github.com/UppsalaNLP/uuparser.", "We first compare our system, which uses our static-dynamic oracle, with the same system using a static oracle.", "This is to find out if we can benefit from error exploration using our partially dynamic oracle.", "We use the same set of hyperparameters as in that paper in all our experiments.", "We additionally compare our method to a different approach to handling non-projectivity, pseudo-projective parsing, as performed in de Lhoneux et al. (2017).", "Pseudo-projective parsing was developed by Nivre and Nilsson (2005).", "In a pre-processing step, the training data is projectivised: some nodes get reattached to a close parent.", "Parsed data are then 'deprojectivised' in a post-processing step.", "In order for information about non-projectivity to be recoverable after parsing, when projectivising, arcs are renamed to encode information about the original parent of dependents which get re-attached.", "Note that hyperparameters were tweaked for the pseudo-projective system, possibly giving an unfair advantage.", "Lastly, we compare to a projective baseline, using a dynamic oracle but no SWAP transition.", "This is to find out the extent to which dealing with non-projectivity is important.", "We selected a sample of 5 treebanks from the Universal Dependencies CoNLL 2017 shared task data.", "We selected languages to have different frequencies of non-projectivity, both at the sentence level and at the level of individual arcs, ranging from a very high frequency (Ancient Greek) to a low frequency (English), as well as some typological variety.", "Non-projective frequencies were taken from Straka et al. (2015) and are included in Table 1, which reports our results on the development sets (best out of 20 epochs).", "Comparing to the static baseline, we can verify that our static-dynamic oracle really preserves the benefits of training with error exploration, with improvements ranging from 0.5 to 3.5 points.", "(All differences here are statistically significant
with p<0.01, except for Portuguese, where the difference is statistically significant with p<0.05 according to the McNemar test).", "The new system achieves results largely on par with the pseudo-projective parser.", "Our method is better by a small margin for 3 out of 5 languages and worse by a large margin for 1.", "[Table 1: LAS on dev sets with gold tokenization for our static-dynamic system (S-Dy), the static and projective baselines (Static, Proj) and the pseudo-projective system of de Lhoneux et al. (2017) (PProj). %NP = percentage of non-projective arcs/sentences.]", "Overall, these results are encouraging given that our method is simpler and more efficient to train, with no need for pre- or post-processing and no extension of the dependency label set.", "Comparing to the projective baseline, we see that strictly projective parsing can be slightly better than both online reordering and pseudo-projective parsing for a language with few non-projective arcs/sentences like English.", "For all other languages, we see small (Arabic) to big (Ancient Greek) improvements from dealing with non-projectivity in some way.", "Conclusion We have shown how the SWAP transition for online reordering can be integrated into the arc-hybrid transition system for dependency parsing in such a way that we still benefit from training with exploration using a static-dynamic oracle.", "In the future, we intend to test this further by evaluating our model on more languages with proper tuning of hyperparameters.", "We are also interested in the question of whether it is possible to define a fully dynamic oracle for our system and allow exploration for the SWAP transition too." ] }
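The transition definitions quoted above map almost line for line onto code. Below is a minimal Python sketch of the extended arc-hybrid system, assuming 1-based word indices with n+1 as the artificial root r; the class and function names are illustrative and are not taken from UUparser.

```python
class Config:
    """Arc-hybrid configuration c = (Sigma, B, A)."""
    def __init__(self, n):
        self.stack = []                      # Sigma, head to the right
        self.buffer = list(range(1, n + 2))  # B, head to the left; n+1 = r
        self.arcs = set()                    # A, set of (head, dependent) pairs
        self.root = n + 1

    def is_terminal(self):
        return not self.stack and self.buffer == [self.root]

def shift(c):
    # SHIFT: move b onto the stack; legal only if b != r.
    c.stack.append(c.buffer.pop(0))

def left(c):
    # LEFT: add arc (b, s0) and pop s0; legal only if |Sigma| > 0
    # (and |Sigma| == 1 or b != r, so that r gets exactly one dependent).
    c.arcs.add((c.buffer[0], c.stack.pop()))

def right(c):
    # RIGHT: add arc (s1, s0) and pop s0; legal only if |Sigma| > 1.
    s0 = c.stack.pop()
    c.arcs.add((c.stack[-1], s0))

def swap(c):
    # SWAP: move s0 to the second buffer position, inverting the order
    # of s0 and b; legal only if |Sigma| > 0, |B| > 1 and s0 < b.
    c.buffer.insert(1, c.stack.pop())
```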
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "An Extended Transition System", "A Static-Dynamic Oracle", "Auxiliary Functions and Notation", "Static Oracle for SWAP", "Dynamic Oracle", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-46#paper-1069#slide-4
Hybrid Parsing with Reordering
found best example ever Thanks Carlos Gomez-Rodriguez for the example!
found best example ever Thanks Carlos Gomez-Rodriguez for the example!
[]
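The oracle presupposes that PROJ(i), the position of each node in the projective order, has been precomputed from the gold tree. One standard way to obtain it, following the inorder-traversal construction of Nivre (2009), is sketched below; this preprocessing step is an assumption about the implementation, not code taken from the paper.

```python
def projective_order(head, root):
    # head: dict mapping each node to its gold head; root = n + 1.
    children = {}
    for d, h in head.items():
        children.setdefault(h, []).append(d)
    proj, count = {}, 0

    def visit(node):
        # Visit a node between its left and right children (inorder),
        # children taken in original word order.
        nonlocal count
        kids = sorted(children.get(node, []))
        for k in (k for k in kids if k < node):
            visit(k)
        count += 1
        proj[node] = count
        for k in (k for k in kids if k > node):
            visit(k)

    visit(root)
    return proj
```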
GEM-SciDuet-train-46#paper-1069#slide-5
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
GEM-SciDuet-train-46#paper-1069#slide-5
Arc-Hybrid Parsing with Reordering
found best example ever found1 best2 example4 ever3 found1 best2 ever3 example4
found best example ever found1 best2 example4 ever3 found1 best2 ever3 example4
[]
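The slide content just above indexes the words of "found best example ever" by their projective order. Assuming a gold tree in which "example" is the object of "found" and "ever" modifies "best" (so the arcs found→example and best→ever cross, which is consistent with the reordering found1 best2 ever3 example4 shown on the slide, though the tree itself is not spelled out in the data), one zero-cost derivation looks as follows.

```python
# found=1, best=2, example=3, ever=4, root r=5 -- an assumed gold tree.
head = {1: 5, 3: 1, 2: 3, 4: 2}
proj = projective_order(head, root=5)   # {1: 1, 2: 2, 4: 3, 3: 4, 5: 5}

# One zero-cost transition sequence under the static-dynamic oracle:
# SHIFT SHIFT SHIFT   Sigma=[1,2,3], B=[4,5]
# SWAP                forced: PROJ(3)=4 > PROJ(4)=3, so B=[4,3,5]
# SHIFT RIGHT         adds arc 2 -> 4
# LEFT                adds arc 3 -> 2
# SHIFT RIGHT         adds arc 1 -> 3
# LEFT                adds arc 5 -> 1; terminal: Sigma=[], B=[5]
```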
GEM-SciDuet-train-46#paper-1069#slide-7
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
GEM-SciDuet-train-46#paper-1069#slide-7
A Static-Dynamic Oracle
found best example ever [found1 best2] example4 ever3
found best example ever [found1 best2] example4 ever3
[]
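Putting the pieces together, the static part (forced or forbidden SWAP via PROJ) and the dynamic part (zero-cost transitions via the cost functions) can be combined into one oracle routine. A sketch under the same assumptions as the earlier blocks; the function name is illustrative.

```python
def oracle_transitions(c, head, rdeps, proj):
    """Return the optimal (zero-cost) transitions in configuration c."""
    b = c.buffer[0]
    # Static part: force SWAP whenever PROJ(s0) > PROJ(b); the legality
    # precondition PROJ(s0) > PROJ(b) rules out all other SWAPs.
    if c.stack and len(c.buffer) > 1 and proj[c.stack[-1]] > proj[b]:
        return ["swap"]
    # Dynamic part: any zero-cost transition among SHIFT/LEFT/RIGHT.
    optimal = []
    if b != c.root and cost_shift(c, head, rdeps, proj) == 0:
        optimal.append("shift")
    left_legal = bool(c.stack) and (len(c.stack) == 1 or b != c.root)
    if left_legal and cost_left(c, head, rdeps) == 0:
        optimal.append("left")
    if len(c.stack) > 1 and cost_right(c, head, rdeps) == 0:
        optimal.append("right")
    return optimal
```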
GEM-SciDuet-train-46#paper-1069#slide-8
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
We extend the arc-hybrid transition system for dependency parsing with a SWAP transition that enables reordering of the words and construction of non-projective trees. Although this extension potentially breaks the arc-decomposability of the transition system, we show that the existing dynamic oracle can be modified and combined with a static oracle for the SWAP transition. Experiments on five languages with different degrees of non-projectivity show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Non-projective sentences are a notorious problem in dependency parsing.", "Traditional algorithms like those developed by Nivre (2003 Nivre ( , 2004 for transition-based parsing only allow the construction of projective trees.", "These algorithms make use of a stack, a buffer and a set of arcs, and parsing consists of performing a sequence of transitions on these structures.", "Traditional algorithms have been extended in different ways to allow the construction of non-projective trees (Nivre and Nilsson, 2005; Attardi, 2006; Nivre, 2007; Gómez-Rodríguez and Nivre, 2010) .", "One method proposed by Nivre (2009) is based on the idea of word reordering.", "This is achieved by adding a transition that swaps two items in the data structures used, enabling the construction of arbitrary non-projective trees while still only adding arcs between adjacent words (after possible reordering).", "This technique was previously used in the arc-standard transition system (Nivre, 2004) .", "The first contribution of this paper is to show that it can also be combined with the arc-hybrid system proposed by Kuhlmann et al.", "(2011) .", "Recent advances in dependency parsing have demonstrated the benefit of using dynamic oracles for training dependency parsers (Goldberg and Nivre, 2013) .", "Traditionally, parsers were trained in a static way and were only exposed to configurations resulting from optimal transitions during training.", "Dynamic oracles define optimal transition sequences for any configuration in which the parser may be.", "The use of dynamic oracles enables training with exploration of errors, which mitigates the problem of error propagation at prediction time.", "In order to define a dynamic oracle, we need to be able to compute the cost of any transition in any configuration, where cost is usually defined as minimum Hamming loss with respect to the best tree reachable from that configuration.", "Goldberg and Nivre (2013) showed that this computation is straightforward for transition systems that satisfy the property of arc-decomposability, meaning that a tree is reachable from a configuration if and only if every arc in the tree is reachable in itself.", "Based on this result, they defined dynamic oracles for the arc-eager (Nivre, 2003) , arc-hybrid (Kuhlmann et al., 2011) and easy-first (Goldberg and Elhadad, 2010) systems.", "Transition systems that allow non-projective trees are in general not arc-decomposable and therefore require different methods for constructing dynamic oracles (Gómez-Rodríguez and Fernández-González, 2015) .", "The online reordering system of Nivre (2009) is furthermore based on the arc-standard system, which is not even arc-decomposable in itself (Goldberg and Nivre, 2013) .", "The second contribution of this paper is to show that we can take advantage of the arcdecomposability of the arc-hybrid transition system and extend the existing dynamic oracle to deal with the added swap transition.", "The resulting or-acle is static with respect to the new 
transition but remains dynamic for all other transitions.", "We show experimentally that this static-dynamic oracle gives a significant advantage over the alternative static oracle and results in competitive results for non-projective parsing.", "An Extended Transition System The arc-hybrid system has configurations of the form c = (Σ, B, A), where • Σ is a stack (represented as a list with the head to the right), • B is a buffer (represented as a list with the head to the left), • A is a set of dependency arcs (represented as (h, d) pairs).", "1 Given a sentence W = w 1 , .", ".", ".", ", w n , the system is initialized to: c 0 = ([ ], [1, .", ".", ".", ", n, n+1], { }) where n+1 is a special root node, denoted r from now on.", "Terminal configurations have the form: c = ([ ], [r], A) and the parse tree is given by the arc set A.", "There are preconditions such that SHIFT is legal only if b = r, RIGHT only if |Σ| > 1 and LEFT only if |Σ| > 0.", "In order to enforce that r has exactly one dependent, as required by some dependency grammar frameworks, we add a precondition such that LEFT is legal only if |Σ| = 1 or b = r. In the extended system, we add a SWAP transition to be able to construct non-projective trees using online reordering: • SWAP[(σ|s 0 , b|β, A)] = (σ, b|s 0 |β, A) There is a precondition making SWAP legal only if |Σ| > 0, |B| > 1 and s 0 < b.", "3 The SWAP transition reorders nodes by moving the item on top of the stack (s 0 ) to the second position in the buffer, thus inverting the order of s 0 and b.", "The SHIFT and SWAP transitions together implement a simple sorting algorithm, which allows us to permute the order of nodes arbitrarily.", "As shown by (Nivre, 2009) , this implies that we can construct any non-projective tree by reordering and adding arcs between adjacent nodes using LEFT and RIGHT.", "As already observed, the main advantage of the arc-hybrid system over the arc-standard system is that it is arc-decomposable, which allows us to construct a simple and efficient dynamic oracle.", "The arc-eager system (Nivre, 2003) is also arcdecomposable but cannot be combined with SWAP because items on the stack in that system do not necessarily represent disjoint subtrees.", "A Static-Dynamic Oracle The dynamic oracle for arc-hybrid parsing defined by Goldberg and Nivre (2013) computes the cost of a transition by counting the number of gold arcs that are made unreachable by applying that transition.", "This presupposes that the system is arcdecomposable, a result that is proven in the same paper.", "Our construction of an oracle for arc-hybrid parsing with online ordering is based on the conjecture that we can retain arc-decomposition by only making SWAP transitions that are necessary to make non-projective arcs reachable and by enforcing all such transitions.", "Proving this conjecture is, however, outside the scope of this paper.", "Auxiliary Functions and Notation We assume that gold trees are preprocessed at training time to compute the following information for each input node i: • PROJ(i) = the position of node i in the projective order.", "4 • RDEPS(i) = the set of reachable dependents of i, initially all dependents of i.", "• LEFT: C(LEFT) = |RDEPS(s 0 )| + [[h(s 0 ) = b and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• RIGHT: C(RIGHT) = |RDEPS(s 0 )| + [[h(s 0 ) = s 1 and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• SHIFT: We use h(i) to denote the head of a node i 
in the gold tree.", "1.", "If there exists a node i ∈ B −b such that b < i and PROJ(b) > PROJ(i): C(SHIFT) = 0 2.", "Else: C(SHIFT) = |{d ∈ RDEPS(b) | d ∈ Σ}| + [[h(b) ∈ Σ −s 0 and b ∈ RDEPS(h(b))]] Updates: Remove b from RDEPS(h(b)) if h(b) ∈ Σ −s 0 and remove d ∈ Σ from RDEPS(b).", "Static Oracle for SWAP Our oracle is fully dynamic with respect to SHIFT, LEFT and RIGHT but basically static with respect to SWAP.", "This means that only optimal (zero cost) SWAP transitions are allowed during training and that we force the parser to apply the SWAP transition when needed.", "Optimal: To prevent non-optimal SWAP transitions, we add a precondition so that SWAP is legal only if PROJ(s 0 ) > PROJ(b).", "Forced: To force necessary SWAP transitions, we bypass the oracle whenever PROJ(s 0 ) > PROJ(b).", "5 Dynamic Oracle Since we use a static oracle for SWAP transitions, these will always have zero cost.", "The dynamic oracle thus only needs to define costs for the remaining three transitions.", "To construct the oracle, we start from the old dynamic oracle for the projective system and extend it to account for the added flexibility introduced by SWAP.", "Figure 1 defines the transition costs as well as the necessary updates to RDEPS after applying a transition.", "• LEFT: Adding the arc (b, s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from b nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head in s 0 |β and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we instead define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "• RIGHT: Adding the arc (s 1 , s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from s 1 nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we again define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "1 2 3 4 s 1 s 0 b [ 1 2 ] Σ [ 3 4 ] B RIGHT ⇒ 1 2 3 4 [ 1 ] Σ [ 3 4 ] B SHIFT ⇓ 1 2 3 4 [ 1 2 3 ] Σ [ 4 ] B 1 2 4 3 s 1 s 0 b [ 1 2 ] Σ [ 4 3 ] B • SHIFT: In the projective case, shifting b onto the stack means that b will not be able to acquire a head in Σ other than the top item s 0 nor any dependents in Σ.", "With the SWAP transition and a static oracle, we also have to consider the case where b can later be swapped back to the buffer, in which case SHIFT has zero cost.", "We therefore have two cases in Figure 1 .", "In the first case, no updates are needed.", "In the second case, the updates are analogous to the old projective case.", "To illustrate how the oracle works, let us look at some hypothetical configurations.", "First, we can have a situation as in the top left corner of Figure 2 , where all nodes are in projective order given the gold tree displayed above the nodes.", "For simplicity, the nodes are named after their projective order.", "Applying a RIGHT transition in this configuration makes it impossible for s 0 (2) to be attached to its head (3) and therefore makes us lose the arc 3 → 2, as shown in the top right corner.", "If we instead apply a SHIFT transition, we lose the arc between b (3) and its head (1) as well as the arc 3 → 2, as 
shown in the bottom left corner.", "By contrast, a LEFT transition has zero cost, because no arcs are lost so the best tree reachable in the orig-inal configuration is still reachable after applying the LEFT transition.", "Consider now a configuration, like the one in the bottom right corner of Figure 2 , where the nodes are not in projective order.", "Here we can shift b (4) onto the stack without cost, because it will later be swapped back to the buffer to restore the projective order between 4 and 3.", "A RIGHT transition makes us lose the head and dependent of s 0 (4 → 2 and 2 → 3).", "A LEFT transition makes us lose the dependent of s 0 (2 → 3) .", "The combination of a dynamic oracle for LEFT, RIGHT and SHIFT with a static oracle for SWAP allows us to benefit from training with exploration in most situations and at the same time capture nonprojective dependencies.", "Experiments We extend the parser we used in de Lhoneux et al.", "(2017), a greedy transition-based parser that predicts the dependency tree given the raw words of a sentence.", "That parser is itself an extension of the parser developed by Kiperwasser and Goldberg (2016) .", "It relies on a BiLSTM to learn informative features of words in context and a feed-forward network for predicting the next parsing transition.", "It learns vector representations of the words as well as characters.", "Contrary to parsing tradition, it makes no use of part-of-speech tags.", "We released our system as UUparser 2.0, available at https: //github.com/UppsalaNLP/uuparser.", "We first compare our system, which uses our static-dynamic oracle, with the same system using a static oracle.", "This is to find out if we can benefit from error exploration using our partially dynamic oracle.", "We use the same set of hyperparameters as in that paper in all our experiments.", "We additionally compare our method to a different approach to handling non-projectivity, pseudo-projective parsing, as performed in de Lhoneux et al.", "(2017) .", "Pseudo-projective parsing was developed by Nivre and Nilsson (2005) .", "In a pre-processing step, the training data is projectivised: some nodes get reattached to a close parent.", "Parsed data are then 'deprojectivised' in a post-processing step.", "In order for information about non-projectivity to be recoverable after parsing, when projectivising, arcs are renamed to encode information about the original parent of dependents which get re-attached.", "Note that hyperparameters were tweaked for the pseudo-projective system, possibly giving an unfair advantage.", "Lastly, we compare to a projective baseline, using a dynamic oracle but no SWAP transition.", "6 This is to find out the extent to which dealing with non-projectivity is important.", "We selected a sample of 5 treebanks from the Universal Dependencies CoNLL 2017 shared task data .", "We selected languages to have different frequencies of non-projectivity, both at the sentence level and at the level of individual arcs, ranging from a very high frequency (Ancient-Greek) to a low frequency (English), as well as some typological variety.", "Non-projective frequencies were taken from Straka et al.", "(2015) and are included in Table 1 , which report our results on the development sets (best out of 20 epochs).", "Comparing to the static baseline, we can verify that our static-dynamic oracle really preserves the benefits of training with error exploration, with improvements ranging from 0.5 to 3.5 points.", "(All differences here are statistically significant 
with p<0.01, except for Portuguese, where the difference is statistically significant with p<0.05 according to the McNemar test).", "The new system achieves results largely on par with the pseudo-projective parser.", "Our method is better by a small margin for 3 out of 5 languages Table 1 : LAS on dev sets with gold tokenization for our static-dynamic system (S-Dy), the static and projective baselines (Static, Proj) and the pseudo-projective system of de Lhoneux et al.", "(2017) (PProj).", "%NP = percentage of nonprojective arcs/sentences.", "and worse by a large margin for 1.", "Overall, these results are encouraging given that our method is simpler and more efficient to train, with no need for pre-or post-processing and no extension of the dependency label set.", "7 Comparing to the projective baseline, we see that strictly projective parsing can be slightly better than both online reordering and pseudoprojective parsing for a language with few non-projective arcs/sentences like English.", "For all other languages, we see small (Arabic) to big (Ancient Greek) improvements from dealing with non-projectivity in some way.", "Conclusion We have shown how the SWAP transition for online reordering can be integrated into the archybrid transition system for dependency parsing in such a way that we still benefit from training with exploration using a static-dynamic oracle.", "In the future, we intend to test this further by evaluating our model on more languages with proper tuning of hyperparameters.", "We are also interested in the question of whether it is possible to define a fully dynamic oracle for our system and allow exploration for the SWAP transition too." ] }
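The SHIFT cost and the SWAP precondition in the paper text above both depend on PROJ(i), the position of node i in the projective order of the gold tree. As a concrete aside (not code from the paper; all names are hypothetical), here is a minimal Python sketch of the standard way to compute this order, by an inorder traversal of the gold tree in the style of Nivre (2009):

```python
def projective_order(heads, n):
    """PROJ as a dict: PROJ[i] is the position of node i in an inorder
    traversal of the gold tree that visits each head between its left
    and right dependents, preserving their surface order.
    heads[d] is the gold head of d (0 for dependents of the root)."""
    children = {i: [] for i in range(n + 1)}
    for d in range(1, n + 1):          # d ascending, so lists stay sorted
        children[heads[d]].append(d)
    order = []
    def visit(i):
        for c in children[i]:
            if c < i:
                visit(c)               # left dependents first
        order.append(i)                # then the head itself
        for c in children[i]:
            if c > i:
                visit(c)               # then right dependents
    for r in children[0]:
        visit(r)
    return {node: pos for pos, node in enumerate(order)}
```

For a projective tree this recovers the surface order; for a non-projective one it gives the target reordering that SHIFT and SWAP jointly realize.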
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "An Extended Transition System", "A Static-Dynamic Oracle", "Auxiliary Functions and Notation", "Static Oracle for SWAP", "Dynamic Oracle", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-46#paper-1069#slide-8
A Static-Dynamic Oracle
found best example ever
found best example ever
[]
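To make the oracle described in this record concrete, the following is a minimal Python sketch of the transition costs of Figure 1 together with the static SWAP rule. It is illustrative only and not taken from UUparser; `head`, `proj` and `rdeps` stand for the gold head function, the projective order and the reachable-dependent sets defined above, and the indicator terms are written with h(s0) ≠ b and h(s0) ≠ s1, which the extracted paper text appears to have flattened to '='.

```python
# Hypothetical sketch of the static-dynamic oracle (cf. Figure 1).
# stack/buf hold node ids; head[i] = gold head of i; proj[i] = position
# of i in the projective order; rdeps[i] = still-reachable dependents.

def cost_left(stack, buf, head, rdeps):
    s0, b = stack[-1], buf[0]
    # Popping s0 loses its reachable dependents, plus its gold head
    # unless that head is b (the arc LEFT is about to add).
    return len(rdeps[s0]) + int(head[s0] != b and s0 in rdeps[head[s0]])

def cost_right(stack, head, rdeps):
    s0, s1 = stack[-1], stack[-2]
    return len(rdeps[s0]) + int(head[s0] != s1 and s0 in rdeps[head[s0]])

def cost_shift(stack, buf, head, proj, rdeps):
    b = buf[0]
    # Case 1: b will later be swapped back to the buffer -> zero cost.
    if any(i > b and proj[b] > proj[i] for i in buf[1:]):
        return 0
    # Case 2: b loses dependents already on the stack, and its head if
    # that head sits in the stack below s0.
    lost_deps = sum(1 for d in rdeps[b] if d in stack)
    lost_head = int(head[b] in stack[:-1] and b in rdeps[head[b]])
    return lost_deps + lost_head

def swap_forced(stack, buf, proj):
    # Static part: SWAP is legal, zero-cost and forced exactly when s0
    # follows b in the projective order.
    return bool(stack) and len(buf) > 1 and proj[stack[-1]] > proj[buf[0]]
```

After each applied transition, RDEPS would then be updated as in Figure 1, e.g. emptying rdeps[s0] and removing s0 from rdeps[head[s0]] after LEFT or RIGHT.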
GEM-SciDuet-train-46#paper-1069#slide-9
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
We extend the arc-hybrid transition system for dependency parsing with a SWAP transition that enables reordering of the words and construction of non-projective trees. Although this extension potentially breaks the arc-decomposability of the transition system, we show that the existing dynamic oracle can be modified and combined with a static oracle for the SWAP transition. Experiments on five languages with different degrees of non-projectivity show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Non-projective sentences are a notorious problem in dependency parsing.", "Traditional algorithms like those developed by Nivre (2003 Nivre ( , 2004 for transition-based parsing only allow the construction of projective trees.", "These algorithms make use of a stack, a buffer and a set of arcs, and parsing consists of performing a sequence of transitions on these structures.", "Traditional algorithms have been extended in different ways to allow the construction of non-projective trees (Nivre and Nilsson, 2005; Attardi, 2006; Nivre, 2007; Gómez-Rodríguez and Nivre, 2010) .", "One method proposed by Nivre (2009) is based on the idea of word reordering.", "This is achieved by adding a transition that swaps two items in the data structures used, enabling the construction of arbitrary non-projective trees while still only adding arcs between adjacent words (after possible reordering).", "This technique was previously used in the arc-standard transition system (Nivre, 2004) .", "The first contribution of this paper is to show that it can also be combined with the arc-hybrid system proposed by Kuhlmann et al.", "(2011) .", "Recent advances in dependency parsing have demonstrated the benefit of using dynamic oracles for training dependency parsers (Goldberg and Nivre, 2013) .", "Traditionally, parsers were trained in a static way and were only exposed to configurations resulting from optimal transitions during training.", "Dynamic oracles define optimal transition sequences for any configuration in which the parser may be.", "The use of dynamic oracles enables training with exploration of errors, which mitigates the problem of error propagation at prediction time.", "In order to define a dynamic oracle, we need to be able to compute the cost of any transition in any configuration, where cost is usually defined as minimum Hamming loss with respect to the best tree reachable from that configuration.", "Goldberg and Nivre (2013) showed that this computation is straightforward for transition systems that satisfy the property of arc-decomposability, meaning that a tree is reachable from a configuration if and only if every arc in the tree is reachable in itself.", "Based on this result, they defined dynamic oracles for the arc-eager (Nivre, 2003) , arc-hybrid (Kuhlmann et al., 2011) and easy-first (Goldberg and Elhadad, 2010) systems.", "Transition systems that allow non-projective trees are in general not arc-decomposable and therefore require different methods for constructing dynamic oracles (Gómez-Rodríguez and Fernández-González, 2015) .", "The online reordering system of Nivre (2009) is furthermore based on the arc-standard system, which is not even arc-decomposable in itself (Goldberg and Nivre, 2013) .", "The second contribution of this paper is to show that we can take advantage of the arcdecomposability of the arc-hybrid transition system and extend the existing dynamic oracle to deal with the added swap transition.", "The resulting or-acle is static with respect to the new 
transition but remains dynamic for all other transitions.", "We show experimentally that this static-dynamic oracle gives a significant advantage over the alternative static oracle and results in competitive results for non-projective parsing.", "An Extended Transition System The arc-hybrid system has configurations of the form c = (Σ, B, A), where • Σ is a stack (represented as a list with the head to the right), • B is a buffer (represented as a list with the head to the left), • A is a set of dependency arcs (represented as (h, d) pairs).", "1 Given a sentence W = w 1 , .", ".", ".", ", w n , the system is initialized to: c 0 = ([ ], [1, .", ".", ".", ", n, n+1], { }) where n+1 is a special root node, denoted r from now on.", "Terminal configurations have the form: c = ([ ], [r], A) and the parse tree is given by the arc set A.", "There are preconditions such that SHIFT is legal only if b = r, RIGHT only if |Σ| > 1 and LEFT only if |Σ| > 0.", "In order to enforce that r has exactly one dependent, as required by some dependency grammar frameworks, we add a precondition such that LEFT is legal only if |Σ| = 1 or b = r. In the extended system, we add a SWAP transition to be able to construct non-projective trees using online reordering: • SWAP[(σ|s 0 , b|β, A)] = (σ, b|s 0 |β, A) There is a precondition making SWAP legal only if |Σ| > 0, |B| > 1 and s 0 < b.", "3 The SWAP transition reorders nodes by moving the item on top of the stack (s 0 ) to the second position in the buffer, thus inverting the order of s 0 and b.", "The SHIFT and SWAP transitions together implement a simple sorting algorithm, which allows us to permute the order of nodes arbitrarily.", "As shown by (Nivre, 2009) , this implies that we can construct any non-projective tree by reordering and adding arcs between adjacent nodes using LEFT and RIGHT.", "As already observed, the main advantage of the arc-hybrid system over the arc-standard system is that it is arc-decomposable, which allows us to construct a simple and efficient dynamic oracle.", "The arc-eager system (Nivre, 2003) is also arcdecomposable but cannot be combined with SWAP because items on the stack in that system do not necessarily represent disjoint subtrees.", "A Static-Dynamic Oracle The dynamic oracle for arc-hybrid parsing defined by Goldberg and Nivre (2013) computes the cost of a transition by counting the number of gold arcs that are made unreachable by applying that transition.", "This presupposes that the system is arcdecomposable, a result that is proven in the same paper.", "Our construction of an oracle for arc-hybrid parsing with online ordering is based on the conjecture that we can retain arc-decomposition by only making SWAP transitions that are necessary to make non-projective arcs reachable and by enforcing all such transitions.", "Proving this conjecture is, however, outside the scope of this paper.", "Auxiliary Functions and Notation We assume that gold trees are preprocessed at training time to compute the following information for each input node i: • PROJ(i) = the position of node i in the projective order.", "4 • RDEPS(i) = the set of reachable dependents of i, initially all dependents of i.", "• LEFT: C(LEFT) = |RDEPS(s 0 )| + [[h(s 0 ) = b and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• RIGHT: C(RIGHT) = |RDEPS(s 0 )| + [[h(s 0 ) = s 1 and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• SHIFT: We use h(i) to denote the head of a node i 
in the gold tree.", "1.", "If there exists a node i ∈ B −b such that b < i and PROJ(b) > PROJ(i): C(SHIFT) = 0 2.", "Else: C(SHIFT) = |{d ∈ RDEPS(b) | d ∈ Σ}| + [[h(b) ∈ Σ −s 0 and b ∈ RDEPS(h(b))]] Updates: Remove b from RDEPS(h(b)) if h(b) ∈ Σ −s 0 and remove d ∈ Σ from RDEPS(b).", "Static Oracle for SWAP Our oracle is fully dynamic with respect to SHIFT, LEFT and RIGHT but basically static with respect to SWAP.", "This means that only optimal (zero cost) SWAP transitions are allowed during training and that we force the parser to apply the SWAP transition when needed.", "Optimal: To prevent non-optimal SWAP transitions, we add a precondition so that SWAP is legal only if PROJ(s 0 ) > PROJ(b).", "Forced: To force necessary SWAP transitions, we bypass the oracle whenever PROJ(s 0 ) > PROJ(b).", "5 Dynamic Oracle Since we use a static oracle for SWAP transitions, these will always have zero cost.", "The dynamic oracle thus only needs to define costs for the remaining three transitions.", "To construct the oracle, we start from the old dynamic oracle for the projective system and extend it to account for the added flexibility introduced by SWAP.", "Figure 1 defines the transition costs as well as the necessary updates to RDEPS after applying a transition.", "• LEFT: Adding the arc (b, s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from b nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head in s 0 |β and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we instead define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "• RIGHT: Adding the arc (s 1 , s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from s 1 nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we again define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "1 2 3 4 s 1 s 0 b [ 1 2 ] Σ [ 3 4 ] B RIGHT ⇒ 1 2 3 4 [ 1 ] Σ [ 3 4 ] B SHIFT ⇓ 1 2 3 4 [ 1 2 3 ] Σ [ 4 ] B 1 2 4 3 s 1 s 0 b [ 1 2 ] Σ [ 4 3 ] B • SHIFT: In the projective case, shifting b onto the stack means that b will not be able to acquire a head in Σ other than the top item s 0 nor any dependents in Σ.", "With the SWAP transition and a static oracle, we also have to consider the case where b can later be swapped back to the buffer, in which case SHIFT has zero cost.", "We therefore have two cases in Figure 1 .", "In the first case, no updates are needed.", "In the second case, the updates are analogous to the old projective case.", "To illustrate how the oracle works, let us look at some hypothetical configurations.", "First, we can have a situation as in the top left corner of Figure 2 , where all nodes are in projective order given the gold tree displayed above the nodes.", "For simplicity, the nodes are named after their projective order.", "Applying a RIGHT transition in this configuration makes it impossible for s 0 (2) to be attached to its head (3) and therefore makes us lose the arc 3 → 2, as shown in the top right corner.", "If we instead apply a SHIFT transition, we lose the arc between b (3) and its head (1) as well as the arc 3 → 2, as 
shown in the bottom left corner.", "By contrast, a LEFT transition has zero cost, because no arcs are lost so the best tree reachable in the orig-inal configuration is still reachable after applying the LEFT transition.", "Consider now a configuration, like the one in the bottom right corner of Figure 2 , where the nodes are not in projective order.", "Here we can shift b (4) onto the stack without cost, because it will later be swapped back to the buffer to restore the projective order between 4 and 3.", "A RIGHT transition makes us lose the head and dependent of s 0 (4 → 2 and 2 → 3).", "A LEFT transition makes us lose the dependent of s 0 (2 → 3) .", "The combination of a dynamic oracle for LEFT, RIGHT and SHIFT with a static oracle for SWAP allows us to benefit from training with exploration in most situations and at the same time capture nonprojective dependencies.", "Experiments We extend the parser we used in de Lhoneux et al.", "(2017), a greedy transition-based parser that predicts the dependency tree given the raw words of a sentence.", "That parser is itself an extension of the parser developed by Kiperwasser and Goldberg (2016) .", "It relies on a BiLSTM to learn informative features of words in context and a feed-forward network for predicting the next parsing transition.", "It learns vector representations of the words as well as characters.", "Contrary to parsing tradition, it makes no use of part-of-speech tags.", "We released our system as UUparser 2.0, available at https: //github.com/UppsalaNLP/uuparser.", "We first compare our system, which uses our static-dynamic oracle, with the same system using a static oracle.", "This is to find out if we can benefit from error exploration using our partially dynamic oracle.", "We use the same set of hyperparameters as in that paper in all our experiments.", "We additionally compare our method to a different approach to handling non-projectivity, pseudo-projective parsing, as performed in de Lhoneux et al.", "(2017) .", "Pseudo-projective parsing was developed by Nivre and Nilsson (2005) .", "In a pre-processing step, the training data is projectivised: some nodes get reattached to a close parent.", "Parsed data are then 'deprojectivised' in a post-processing step.", "In order for information about non-projectivity to be recoverable after parsing, when projectivising, arcs are renamed to encode information about the original parent of dependents which get re-attached.", "Note that hyperparameters were tweaked for the pseudo-projective system, possibly giving an unfair advantage.", "Lastly, we compare to a projective baseline, using a dynamic oracle but no SWAP transition.", "6 This is to find out the extent to which dealing with non-projectivity is important.", "We selected a sample of 5 treebanks from the Universal Dependencies CoNLL 2017 shared task data .", "We selected languages to have different frequencies of non-projectivity, both at the sentence level and at the level of individual arcs, ranging from a very high frequency (Ancient-Greek) to a low frequency (English), as well as some typological variety.", "Non-projective frequencies were taken from Straka et al.", "(2015) and are included in Table 1 , which report our results on the development sets (best out of 20 epochs).", "Comparing to the static baseline, we can verify that our static-dynamic oracle really preserves the benefits of training with error exploration, with improvements ranging from 0.5 to 3.5 points.", "(All differences here are statistically significant 
with p<0.01, except for Portuguese, where the difference is statistically significant with p<0.05 according to the McNemar test).", "The new system achieves results largely on par with the pseudo-projective parser.", "Our method is better by a small margin for 3 out of 5 languages Table 1 : LAS on dev sets with gold tokenization for our static-dynamic system (S-Dy), the static and projective baselines (Static, Proj) and the pseudo-projective system of de Lhoneux et al.", "(2017) (PProj).", "%NP = percentage of nonprojective arcs/sentences.", "and worse by a large margin for 1.", "Overall, these results are encouraging given that our method is simpler and more efficient to train, with no need for pre-or post-processing and no extension of the dependency label set.", "7 Comparing to the projective baseline, we see that strictly projective parsing can be slightly better than both online reordering and pseudoprojective parsing for a language with few non-projective arcs/sentences like English.", "For all other languages, we see small (Arabic) to big (Ancient Greek) improvements from dealing with non-projectivity in some way.", "Conclusion We have shown how the SWAP transition for online reordering can be integrated into the archybrid transition system for dependency parsing in such a way that we still benefit from training with exploration using a static-dynamic oracle.", "In the future, we intend to test this further by evaluating our model on more languages with proper tuning of hyperparameters.", "We are also interested in the question of whether it is possible to define a fully dynamic oracle for our system and allow exploration for the SWAP transition too." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "An Extended Transition System", "A Static-Dynamic Oracle", "Auxiliary Functions and Notation", "Static Oracle for SWAP", "Dynamic Oracle", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-46#paper-1069#slide-9
Transition Based Parsing using BiLSTM
[Figure: BiLSTM feature extractor over the sentence "the brown fox jumped" plus root. Each input x_w concatenates a word embedding e(w), a character-based embedding, and a pretrained embedding pe(w); forward (LSTM_f) and backward (LSTM_b) states over x_the .. x_root are concatenated into context vectors V_the .. V_root.]
[Figure: BiLSTM feature extractor over the sentence "the brown fox jumped" plus root. Each input x_w concatenates a word embedding e(w), a character-based embedding, and a pretrained embedding pe(w); forward (LSTM_f) and backward (LSTM_b) states over x_the .. x_root are concatenated into context vectors V_the .. V_root.]
[]
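The slide above shows the BiLSTM feature extractor of Kiperwasser and Goldberg (2016): each word vector concatenates a word embedding with a character-based embedding, and the states of a forward and a backward LSTM are concatenated into one context vector per word. Below is a minimal sketch of this encoder; it uses PyTorch purely for illustration (the released UUparser is built on DyNet), and all names and dimensions are invented.

```python
import torch
import torch.nn as nn

class BiLSTMExtractor(nn.Module):
    """v_i = BiLSTM(x_1..x_n)[i], where x_i = [e(w_i) ; char_lstm(w_i)]."""

    def __init__(self, n_words, n_chars, w_dim=100, c_dim=50, hidden=125):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, w_dim)
        self.char_emb = nn.Embedding(n_chars, c_dim)
        self.char_lstm = nn.LSTM(c_dim, c_dim, batch_first=True)
        self.bilstm = nn.LSTM(w_dim + c_dim, hidden,
                              batch_first=True, bidirectional=True)

    def forward(self, word_ids, char_ids):
        # word_ids: LongTensor (n,); char_ids: list of n LongTensors,
        # one sequence of character ids per word.
        char_vecs = torch.stack([
            self.char_lstm(self.char_emb(c).unsqueeze(0))[1][0][0, 0]
            for c in char_ids])                       # (n, c_dim)
        x = torch.cat([self.word_emb(word_ids), char_vecs], dim=-1)
        v, _ = self.bilstm(x.unsqueeze(0))            # (1, n, 2*hidden)
        return v.squeeze(0)                           # one v_i per word
```

The parser then feeds the context vectors of a few salient positions (e.g. the top stack items and the first buffer item) into a feed-forward network that scores the next transition.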
GEM-SciDuet-train-46#paper-1069#slide-10
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
We extend the arc-hybrid transition system for dependency parsing with a SWAP transition that enables reordering of the words and construction of non-projective trees. Although this extension potentially breaks the arc-decomposability of the transition system, we show that the existing dynamic oracle can be modified and combined with a static oracle for the SWAP transition. Experiments on five languages with different degrees of non-projectivity show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Non-projective sentences are a notorious problem in dependency parsing.", "Traditional algorithms like those developed by Nivre (2003 Nivre ( , 2004 for transition-based parsing only allow the construction of projective trees.", "These algorithms make use of a stack, a buffer and a set of arcs, and parsing consists of performing a sequence of transitions on these structures.", "Traditional algorithms have been extended in different ways to allow the construction of non-projective trees (Nivre and Nilsson, 2005; Attardi, 2006; Nivre, 2007; Gómez-Rodríguez and Nivre, 2010) .", "One method proposed by Nivre (2009) is based on the idea of word reordering.", "This is achieved by adding a transition that swaps two items in the data structures used, enabling the construction of arbitrary non-projective trees while still only adding arcs between adjacent words (after possible reordering).", "This technique was previously used in the arc-standard transition system (Nivre, 2004) .", "The first contribution of this paper is to show that it can also be combined with the arc-hybrid system proposed by Kuhlmann et al.", "(2011) .", "Recent advances in dependency parsing have demonstrated the benefit of using dynamic oracles for training dependency parsers (Goldberg and Nivre, 2013) .", "Traditionally, parsers were trained in a static way and were only exposed to configurations resulting from optimal transitions during training.", "Dynamic oracles define optimal transition sequences for any configuration in which the parser may be.", "The use of dynamic oracles enables training with exploration of errors, which mitigates the problem of error propagation at prediction time.", "In order to define a dynamic oracle, we need to be able to compute the cost of any transition in any configuration, where cost is usually defined as minimum Hamming loss with respect to the best tree reachable from that configuration.", "Goldberg and Nivre (2013) showed that this computation is straightforward for transition systems that satisfy the property of arc-decomposability, meaning that a tree is reachable from a configuration if and only if every arc in the tree is reachable in itself.", "Based on this result, they defined dynamic oracles for the arc-eager (Nivre, 2003) , arc-hybrid (Kuhlmann et al., 2011) and easy-first (Goldberg and Elhadad, 2010) systems.", "Transition systems that allow non-projective trees are in general not arc-decomposable and therefore require different methods for constructing dynamic oracles (Gómez-Rodríguez and Fernández-González, 2015) .", "The online reordering system of Nivre (2009) is furthermore based on the arc-standard system, which is not even arc-decomposable in itself (Goldberg and Nivre, 2013) .", "The second contribution of this paper is to show that we can take advantage of the arcdecomposability of the arc-hybrid transition system and extend the existing dynamic oracle to deal with the added swap transition.", "The resulting or-acle is static with respect to the new 
transition but remains dynamic for all other transitions.", "We show experimentally that this static-dynamic oracle gives a significant advantage over the alternative static oracle and results in competitive results for non-projective parsing.", "An Extended Transition System The arc-hybrid system has configurations of the form c = (Σ, B, A), where • Σ is a stack (represented as a list with the head to the right), • B is a buffer (represented as a list with the head to the left), • A is a set of dependency arcs (represented as (h, d) pairs).", "1 Given a sentence W = w 1 , .", ".", ".", ", w n , the system is initialized to: c 0 = ([ ], [1, .", ".", ".", ", n, n+1], { }) where n+1 is a special root node, denoted r from now on.", "Terminal configurations have the form: c = ([ ], [r], A) and the parse tree is given by the arc set A.", "There are preconditions such that SHIFT is legal only if b = r, RIGHT only if |Σ| > 1 and LEFT only if |Σ| > 0.", "In order to enforce that r has exactly one dependent, as required by some dependency grammar frameworks, we add a precondition such that LEFT is legal only if |Σ| = 1 or b = r. In the extended system, we add a SWAP transition to be able to construct non-projective trees using online reordering: • SWAP[(σ|s 0 , b|β, A)] = (σ, b|s 0 |β, A) There is a precondition making SWAP legal only if |Σ| > 0, |B| > 1 and s 0 < b.", "3 The SWAP transition reorders nodes by moving the item on top of the stack (s 0 ) to the second position in the buffer, thus inverting the order of s 0 and b.", "The SHIFT and SWAP transitions together implement a simple sorting algorithm, which allows us to permute the order of nodes arbitrarily.", "As shown by (Nivre, 2009) , this implies that we can construct any non-projective tree by reordering and adding arcs between adjacent nodes using LEFT and RIGHT.", "As already observed, the main advantage of the arc-hybrid system over the arc-standard system is that it is arc-decomposable, which allows us to construct a simple and efficient dynamic oracle.", "The arc-eager system (Nivre, 2003) is also arcdecomposable but cannot be combined with SWAP because items on the stack in that system do not necessarily represent disjoint subtrees.", "A Static-Dynamic Oracle The dynamic oracle for arc-hybrid parsing defined by Goldberg and Nivre (2013) computes the cost of a transition by counting the number of gold arcs that are made unreachable by applying that transition.", "This presupposes that the system is arcdecomposable, a result that is proven in the same paper.", "Our construction of an oracle for arc-hybrid parsing with online ordering is based on the conjecture that we can retain arc-decomposition by only making SWAP transitions that are necessary to make non-projective arcs reachable and by enforcing all such transitions.", "Proving this conjecture is, however, outside the scope of this paper.", "Auxiliary Functions and Notation We assume that gold trees are preprocessed at training time to compute the following information for each input node i: • PROJ(i) = the position of node i in the projective order.", "4 • RDEPS(i) = the set of reachable dependents of i, initially all dependents of i.", "• LEFT: C(LEFT) = |RDEPS(s 0 )| + [[h(s 0 ) = b and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• RIGHT: C(RIGHT) = |RDEPS(s 0 )| + [[h(s 0 ) = s 1 and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• SHIFT: We use h(i) to denote the head of a node i 
in the gold tree.", "1.", "If there exists a node i ∈ B −b such that b < i and PROJ(b) > PROJ(i): C(SHIFT) = 0 2.", "Else: C(SHIFT) = |{d ∈ RDEPS(b) | d ∈ Σ}| + [[h(b) ∈ Σ −s 0 and b ∈ RDEPS(h(b))]] Updates: Remove b from RDEPS(h(b)) if h(b) ∈ Σ −s 0 and remove d ∈ Σ from RDEPS(b).", "Static Oracle for SWAP Our oracle is fully dynamic with respect to SHIFT, LEFT and RIGHT but basically static with respect to SWAP.", "This means that only optimal (zero cost) SWAP transitions are allowed during training and that we force the parser to apply the SWAP transition when needed.", "Optimal: To prevent non-optimal SWAP transitions, we add a precondition so that SWAP is legal only if PROJ(s 0 ) > PROJ(b).", "Forced: To force necessary SWAP transitions, we bypass the oracle whenever PROJ(s 0 ) > PROJ(b).", "5 Dynamic Oracle Since we use a static oracle for SWAP transitions, these will always have zero cost.", "The dynamic oracle thus only needs to define costs for the remaining three transitions.", "To construct the oracle, we start from the old dynamic oracle for the projective system and extend it to account for the added flexibility introduced by SWAP.", "Figure 1 defines the transition costs as well as the necessary updates to RDEPS after applying a transition.", "• LEFT: Adding the arc (b, s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from b nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head in s 0 |β and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we instead define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "• RIGHT: Adding the arc (s 1 , s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from s 1 nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we again define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "1 2 3 4 s 1 s 0 b [ 1 2 ] Σ [ 3 4 ] B RIGHT ⇒ 1 2 3 4 [ 1 ] Σ [ 3 4 ] B SHIFT ⇓ 1 2 3 4 [ 1 2 3 ] Σ [ 4 ] B 1 2 4 3 s 1 s 0 b [ 1 2 ] Σ [ 4 3 ] B • SHIFT: In the projective case, shifting b onto the stack means that b will not be able to acquire a head in Σ other than the top item s 0 nor any dependents in Σ.", "With the SWAP transition and a static oracle, we also have to consider the case where b can later be swapped back to the buffer, in which case SHIFT has zero cost.", "We therefore have two cases in Figure 1 .", "In the first case, no updates are needed.", "In the second case, the updates are analogous to the old projective case.", "To illustrate how the oracle works, let us look at some hypothetical configurations.", "First, we can have a situation as in the top left corner of Figure 2 , where all nodes are in projective order given the gold tree displayed above the nodes.", "For simplicity, the nodes are named after their projective order.", "Applying a RIGHT transition in this configuration makes it impossible for s 0 (2) to be attached to its head (3) and therefore makes us lose the arc 3 → 2, as shown in the top right corner.", "If we instead apply a SHIFT transition, we lose the arc between b (3) and its head (1) as well as the arc 3 → 2, as 
shown in the bottom left corner.", "By contrast, a LEFT transition has zero cost, because no arcs are lost so the best tree reachable in the orig-inal configuration is still reachable after applying the LEFT transition.", "Consider now a configuration, like the one in the bottom right corner of Figure 2 , where the nodes are not in projective order.", "Here we can shift b (4) onto the stack without cost, because it will later be swapped back to the buffer to restore the projective order between 4 and 3.", "A RIGHT transition makes us lose the head and dependent of s 0 (4 → 2 and 2 → 3).", "A LEFT transition makes us lose the dependent of s 0 (2 → 3) .", "The combination of a dynamic oracle for LEFT, RIGHT and SHIFT with a static oracle for SWAP allows us to benefit from training with exploration in most situations and at the same time capture nonprojective dependencies.", "Experiments We extend the parser we used in de Lhoneux et al.", "(2017), a greedy transition-based parser that predicts the dependency tree given the raw words of a sentence.", "That parser is itself an extension of the parser developed by Kiperwasser and Goldberg (2016) .", "It relies on a BiLSTM to learn informative features of words in context and a feed-forward network for predicting the next parsing transition.", "It learns vector representations of the words as well as characters.", "Contrary to parsing tradition, it makes no use of part-of-speech tags.", "We released our system as UUparser 2.0, available at https: //github.com/UppsalaNLP/uuparser.", "We first compare our system, which uses our static-dynamic oracle, with the same system using a static oracle.", "This is to find out if we can benefit from error exploration using our partially dynamic oracle.", "We use the same set of hyperparameters as in that paper in all our experiments.", "We additionally compare our method to a different approach to handling non-projectivity, pseudo-projective parsing, as performed in de Lhoneux et al.", "(2017) .", "Pseudo-projective parsing was developed by Nivre and Nilsson (2005) .", "In a pre-processing step, the training data is projectivised: some nodes get reattached to a close parent.", "Parsed data are then 'deprojectivised' in a post-processing step.", "In order for information about non-projectivity to be recoverable after parsing, when projectivising, arcs are renamed to encode information about the original parent of dependents which get re-attached.", "Note that hyperparameters were tweaked for the pseudo-projective system, possibly giving an unfair advantage.", "Lastly, we compare to a projective baseline, using a dynamic oracle but no SWAP transition.", "6 This is to find out the extent to which dealing with non-projectivity is important.", "We selected a sample of 5 treebanks from the Universal Dependencies CoNLL 2017 shared task data .", "We selected languages to have different frequencies of non-projectivity, both at the sentence level and at the level of individual arcs, ranging from a very high frequency (Ancient-Greek) to a low frequency (English), as well as some typological variety.", "Non-projective frequencies were taken from Straka et al.", "(2015) and are included in Table 1 , which report our results on the development sets (best out of 20 epochs).", "Comparing to the static baseline, we can verify that our static-dynamic oracle really preserves the benefits of training with error exploration, with improvements ranging from 0.5 to 3.5 points.", "(All differences here are statistically significant 
with p<0.01, except for Portuguese, where the difference is statistically significant with p<0.05 according to the McNemar test).", "The new system achieves results largely on par with the pseudo-projective parser.", "Our method is better by a small margin for 3 out of 5 languages Table 1 : LAS on dev sets with gold tokenization for our static-dynamic system (S-Dy), the static and projective baselines (Static, Proj) and the pseudo-projective system of de Lhoneux et al.", "(2017) (PProj).", "%NP = percentage of nonprojective arcs/sentences.", "and worse by a large margin for 1.", "Overall, these results are encouraging given that our method is simpler and more efficient to train, with no need for pre-or post-processing and no extension of the dependency label set.", "7 Comparing to the projective baseline, we see that strictly projective parsing can be slightly better than both online reordering and pseudoprojective parsing for a language with few non-projective arcs/sentences like English.", "For all other languages, we see small (Arabic) to big (Ancient Greek) improvements from dealing with non-projectivity in some way.", "Conclusion We have shown how the SWAP transition for online reordering can be integrated into the archybrid transition system for dependency parsing in such a way that we still benefit from training with exploration using a static-dynamic oracle.", "In the future, we intend to test this further by evaluating our model on more languages with proper tuning of hyperparameters.", "We are also interested in the question of whether it is possible to define a fully dynamic oracle for our system and allow exploration for the SWAP transition too." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "An Extended Transition System", "A Static-Dynamic Oracle", "Auxiliary Functions and Notation", "Static Oracle for SWAP", "Dynamic Oracle", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-46#paper-1069#slide-10
Transition Based Parsing using BiLSTMs
[Figure: BiLSTM encoder: inputs x_the .. x_root pass through forward (LSTM_f) and backward (LSTM_b) LSTMs whose states are concatenated into context vectors V_the .. V_root.]
[Figure: BiLSTM encoder: inputs x_the .. x_root pass through forward (LSTM_f) and backward (LSTM_b) LSTMs whose states are concatenated into context vectors V_the .. V_root.]
[]
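Complementing the encoder slide, here is a hypothetical sketch of the extended arc-hybrid transition system that the classifier drives: SHIFT, LEFT, RIGHT and SWAP rewrite a (stack, buffer, arcs) configuration until the terminal configuration ([], [root], A) is reached. The preconditions from Section 2 are omitted for brevity, and `choose` stands in for the BiLSTM+MLP transition scorer.

```python
# Hypothetical sketch of the extended arc-hybrid system; arcs collects
# (head, dependent) pairs, node ids are 1..n plus the root n+1.

def shift(stack, buf, arcs):
    return stack + [buf[0]], buf[1:], arcs

def left(stack, buf, arcs):       # attach s0 under b, pop s0
    return stack[:-1], buf, arcs | {(buf[0], stack[-1])}

def right(stack, buf, arcs):      # attach s0 under s1, pop s0
    return stack[:-1], buf, arcs | {(stack[-2], stack[-1])}

def swap(stack, buf, arcs):       # move s0 to second buffer position
    return stack[:-1], [buf[0], stack[-1]] + buf[1:], arcs

def parse(n, choose):
    root = n + 1
    stack, buf, arcs = [], list(range(1, n + 1)) + [root], set()
    while stack or buf != [root]:
        transition = choose(stack, buf)   # one of the functions above
        stack, buf, arcs = transition(stack, buf, arcs)
    return arcs
```

Because SHIFT and SWAP together can realize any permutation of the input, LEFT and RIGHT between adjacent (possibly reordered) nodes suffice to build arbitrary non-projective trees.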
GEM-SciDuet-train-46#paper-1069#slide-12
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
We extend the arc-hybrid transition system for dependency parsing with a SWAP transition that enables reordering of the words and construction of non-projective trees. Although this extension potentially breaks the arc-decomposability of the transition system, we show that the existing dynamic oracle can be modified and combined with a static oracle for the SWAP transition. Experiments on five languages with different degrees of non-projectivity show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Non-projective sentences are a notorious problem in dependency parsing.", "Traditional algorithms like those developed by Nivre (2003 Nivre ( , 2004 for transition-based parsing only allow the construction of projective trees.", "These algorithms make use of a stack, a buffer and a set of arcs, and parsing consists of performing a sequence of transitions on these structures.", "Traditional algorithms have been extended in different ways to allow the construction of non-projective trees (Nivre and Nilsson, 2005; Attardi, 2006; Nivre, 2007; Gómez-Rodríguez and Nivre, 2010) .", "One method proposed by Nivre (2009) is based on the idea of word reordering.", "This is achieved by adding a transition that swaps two items in the data structures used, enabling the construction of arbitrary non-projective trees while still only adding arcs between adjacent words (after possible reordering).", "This technique was previously used in the arc-standard transition system (Nivre, 2004) .", "The first contribution of this paper is to show that it can also be combined with the arc-hybrid system proposed by Kuhlmann et al.", "(2011) .", "Recent advances in dependency parsing have demonstrated the benefit of using dynamic oracles for training dependency parsers (Goldberg and Nivre, 2013) .", "Traditionally, parsers were trained in a static way and were only exposed to configurations resulting from optimal transitions during training.", "Dynamic oracles define optimal transition sequences for any configuration in which the parser may be.", "The use of dynamic oracles enables training with exploration of errors, which mitigates the problem of error propagation at prediction time.", "In order to define a dynamic oracle, we need to be able to compute the cost of any transition in any configuration, where cost is usually defined as minimum Hamming loss with respect to the best tree reachable from that configuration.", "Goldberg and Nivre (2013) showed that this computation is straightforward for transition systems that satisfy the property of arc-decomposability, meaning that a tree is reachable from a configuration if and only if every arc in the tree is reachable in itself.", "Based on this result, they defined dynamic oracles for the arc-eager (Nivre, 2003) , arc-hybrid (Kuhlmann et al., 2011) and easy-first (Goldberg and Elhadad, 2010) systems.", "Transition systems that allow non-projective trees are in general not arc-decomposable and therefore require different methods for constructing dynamic oracles (Gómez-Rodríguez and Fernández-González, 2015) .", "The online reordering system of Nivre (2009) is furthermore based on the arc-standard system, which is not even arc-decomposable in itself (Goldberg and Nivre, 2013) .", "The second contribution of this paper is to show that we can take advantage of the arcdecomposability of the arc-hybrid transition system and extend the existing dynamic oracle to deal with the added swap transition.", "The resulting or-acle is static with respect to the new 
transition but remains dynamic for all other transitions.", "We show experimentally that this static-dynamic oracle gives a significant advantage over the alternative static oracle and results in competitive results for non-projective parsing.", "An Extended Transition System The arc-hybrid system has configurations of the form c = (Σ, B, A), where • Σ is a stack (represented as a list with the head to the right), • B is a buffer (represented as a list with the head to the left), • A is a set of dependency arcs (represented as (h, d) pairs).", "1 Given a sentence W = w 1 , .", ".", ".", ", w n , the system is initialized to: c 0 = ([ ], [1, .", ".", ".", ", n, n+1], { }) where n+1 is a special root node, denoted r from now on.", "Terminal configurations have the form: c = ([ ], [r], A) and the parse tree is given by the arc set A.", "There are preconditions such that SHIFT is legal only if b = r, RIGHT only if |Σ| > 1 and LEFT only if |Σ| > 0.", "In order to enforce that r has exactly one dependent, as required by some dependency grammar frameworks, we add a precondition such that LEFT is legal only if |Σ| = 1 or b = r. In the extended system, we add a SWAP transition to be able to construct non-projective trees using online reordering: • SWAP[(σ|s 0 , b|β, A)] = (σ, b|s 0 |β, A) There is a precondition making SWAP legal only if |Σ| > 0, |B| > 1 and s 0 < b.", "3 The SWAP transition reorders nodes by moving the item on top of the stack (s 0 ) to the second position in the buffer, thus inverting the order of s 0 and b.", "The SHIFT and SWAP transitions together implement a simple sorting algorithm, which allows us to permute the order of nodes arbitrarily.", "As shown by (Nivre, 2009) , this implies that we can construct any non-projective tree by reordering and adding arcs between adjacent nodes using LEFT and RIGHT.", "As already observed, the main advantage of the arc-hybrid system over the arc-standard system is that it is arc-decomposable, which allows us to construct a simple and efficient dynamic oracle.", "The arc-eager system (Nivre, 2003) is also arcdecomposable but cannot be combined with SWAP because items on the stack in that system do not necessarily represent disjoint subtrees.", "A Static-Dynamic Oracle The dynamic oracle for arc-hybrid parsing defined by Goldberg and Nivre (2013) computes the cost of a transition by counting the number of gold arcs that are made unreachable by applying that transition.", "This presupposes that the system is arcdecomposable, a result that is proven in the same paper.", "Our construction of an oracle for arc-hybrid parsing with online ordering is based on the conjecture that we can retain arc-decomposition by only making SWAP transitions that are necessary to make non-projective arcs reachable and by enforcing all such transitions.", "Proving this conjecture is, however, outside the scope of this paper.", "Auxiliary Functions and Notation We assume that gold trees are preprocessed at training time to compute the following information for each input node i: • PROJ(i) = the position of node i in the projective order.", "4 • RDEPS(i) = the set of reachable dependents of i, initially all dependents of i.", "• LEFT: C(LEFT) = |RDEPS(s 0 )| + [[h(s 0 ) = b and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• RIGHT: C(RIGHT) = |RDEPS(s 0 )| + [[h(s 0 ) = s 1 and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• SHIFT: We use h(i) to denote the head of a node i 
in the gold tree.", "1.", "If there exists a node i ∈ B −b such that b < i and PROJ(b) > PROJ(i): C(SHIFT) = 0 2.", "Else: C(SHIFT) = |{d ∈ RDEPS(b) | d ∈ Σ}| + [[h(b) ∈ Σ −s 0 and b ∈ RDEPS(h(b))]] Updates: Remove b from RDEPS(h(b)) if h(b) ∈ Σ −s 0 and remove d ∈ Σ from RDEPS(b).", "Static Oracle for SWAP Our oracle is fully dynamic with respect to SHIFT, LEFT and RIGHT but basically static with respect to SWAP.", "This means that only optimal (zero cost) SWAP transitions are allowed during training and that we force the parser to apply the SWAP transition when needed.", "Optimal: To prevent non-optimal SWAP transitions, we add a precondition so that SWAP is legal only if PROJ(s 0 ) > PROJ(b).", "Forced: To force necessary SWAP transitions, we bypass the oracle whenever PROJ(s 0 ) > PROJ(b).", "5 Dynamic Oracle Since we use a static oracle for SWAP transitions, these will always have zero cost.", "The dynamic oracle thus only needs to define costs for the remaining three transitions.", "To construct the oracle, we start from the old dynamic oracle for the projective system and extend it to account for the added flexibility introduced by SWAP.", "Figure 1 defines the transition costs as well as the necessary updates to RDEPS after applying a transition.", "• LEFT: Adding the arc (b, s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from b nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head in s 0 |β and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we instead define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "• RIGHT: Adding the arc (s 1 , s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from s 1 nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we again define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "1 2 3 4 s 1 s 0 b [ 1 2 ] Σ [ 3 4 ] B RIGHT ⇒ 1 2 3 4 [ 1 ] Σ [ 3 4 ] B SHIFT ⇓ 1 2 3 4 [ 1 2 3 ] Σ [ 4 ] B 1 2 4 3 s 1 s 0 b [ 1 2 ] Σ [ 4 3 ] B • SHIFT: In the projective case, shifting b onto the stack means that b will not be able to acquire a head in Σ other than the top item s 0 nor any dependents in Σ.", "With the SWAP transition and a static oracle, we also have to consider the case where b can later be swapped back to the buffer, in which case SHIFT has zero cost.", "We therefore have two cases in Figure 1 .", "In the first case, no updates are needed.", "In the second case, the updates are analogous to the old projective case.", "To illustrate how the oracle works, let us look at some hypothetical configurations.", "First, we can have a situation as in the top left corner of Figure 2 , where all nodes are in projective order given the gold tree displayed above the nodes.", "For simplicity, the nodes are named after their projective order.", "Applying a RIGHT transition in this configuration makes it impossible for s 0 (2) to be attached to its head (3) and therefore makes us lose the arc 3 → 2, as shown in the top right corner.", "If we instead apply a SHIFT transition, we lose the arc between b (3) and its head (1) as well as the arc 3 → 2, as 
shown in the bottom left corner.", "By contrast, a LEFT transition has zero cost, because no arcs are lost so the best tree reachable in the orig-inal configuration is still reachable after applying the LEFT transition.", "Consider now a configuration, like the one in the bottom right corner of Figure 2 , where the nodes are not in projective order.", "Here we can shift b (4) onto the stack without cost, because it will later be swapped back to the buffer to restore the projective order between 4 and 3.", "A RIGHT transition makes us lose the head and dependent of s 0 (4 → 2 and 2 → 3).", "A LEFT transition makes us lose the dependent of s 0 (2 → 3) .", "The combination of a dynamic oracle for LEFT, RIGHT and SHIFT with a static oracle for SWAP allows us to benefit from training with exploration in most situations and at the same time capture nonprojective dependencies.", "Experiments We extend the parser we used in de Lhoneux et al.", "(2017), a greedy transition-based parser that predicts the dependency tree given the raw words of a sentence.", "That parser is itself an extension of the parser developed by Kiperwasser and Goldberg (2016) .", "It relies on a BiLSTM to learn informative features of words in context and a feed-forward network for predicting the next parsing transition.", "It learns vector representations of the words as well as characters.", "Contrary to parsing tradition, it makes no use of part-of-speech tags.", "We released our system as UUparser 2.0, available at https: //github.com/UppsalaNLP/uuparser.", "We first compare our system, which uses our static-dynamic oracle, with the same system using a static oracle.", "This is to find out if we can benefit from error exploration using our partially dynamic oracle.", "We use the same set of hyperparameters as in that paper in all our experiments.", "We additionally compare our method to a different approach to handling non-projectivity, pseudo-projective parsing, as performed in de Lhoneux et al.", "(2017) .", "Pseudo-projective parsing was developed by Nivre and Nilsson (2005) .", "In a pre-processing step, the training data is projectivised: some nodes get reattached to a close parent.", "Parsed data are then 'deprojectivised' in a post-processing step.", "In order for information about non-projectivity to be recoverable after parsing, when projectivising, arcs are renamed to encode information about the original parent of dependents which get re-attached.", "Note that hyperparameters were tweaked for the pseudo-projective system, possibly giving an unfair advantage.", "Lastly, we compare to a projective baseline, using a dynamic oracle but no SWAP transition.", "6 This is to find out the extent to which dealing with non-projectivity is important.", "We selected a sample of 5 treebanks from the Universal Dependencies CoNLL 2017 shared task data .", "We selected languages to have different frequencies of non-projectivity, both at the sentence level and at the level of individual arcs, ranging from a very high frequency (Ancient-Greek) to a low frequency (English), as well as some typological variety.", "Non-projective frequencies were taken from Straka et al.", "(2015) and are included in Table 1 , which report our results on the development sets (best out of 20 epochs).", "Comparing to the static baseline, we can verify that our static-dynamic oracle really preserves the benefits of training with error exploration, with improvements ranging from 0.5 to 3.5 points.", "(All differences here are statistically significant 
with p<0.01, except for Portuguese, where the difference is statistically significant with p<0.05 according to the McNemar test).", "The new system achieves results largely on par with the pseudo-projective parser.", "Our method is better by a small margin for 3 out of 5 languages Table 1 : LAS on dev sets with gold tokenization for our static-dynamic system (S-Dy), the static and projective baselines (Static, Proj) and the pseudo-projective system of de Lhoneux et al.", "(2017) (PProj).", "%NP = percentage of nonprojective arcs/sentences.", "and worse by a large margin for 1.", "Overall, these results are encouraging given that our method is simpler and more efficient to train, with no need for pre-or post-processing and no extension of the dependency label set.", "7 Comparing to the projective baseline, we see that strictly projective parsing can be slightly better than both online reordering and pseudoprojective parsing for a language with few non-projective arcs/sentences like English.", "For all other languages, we see small (Arabic) to big (Ancient Greek) improvements from dealing with non-projectivity in some way.", "Conclusion We have shown how the SWAP transition for online reordering can be integrated into the archybrid transition system for dependency parsing in such a way that we still benefit from training with exploration using a static-dynamic oracle.", "In the future, we intend to test this further by evaluating our model on more languages with proper tuning of hyperparameters.", "We are also interested in the question of whether it is possible to define a fully dynamic oracle for our system and allow exploration for the SWAP transition too." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "An Extended Transition System", "A Static-Dynamic Oracle", "Auxiliary Functions and Notation", "Static Oracle for SWAP", "Dynamic Oracle", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-46#paper-1069#slide-12
Experiments
[Figure: bar charts of dev-set LAS per language (English, Arabic, Portuguese, Basque, Ancient-Greek), first comparing the Static and Static-Dynamic oracles and then the Static-Dynamic system against the PProj and Proj baselines; slide footer: Miryam de Lhoneux, Sara Stymne and Joakim Nivre, "Non-Projective Parsing with a Static-Dynamic Oracle".]
[Figure: bar charts of dev-set LAS per language (English, Arabic, Portuguese, Basque, Ancient-Greek), first comparing the Static and Static-Dynamic oracles and then the Static-Dynamic system against the PProj and Proj baselines; slide footer: Miryam de Lhoneux, Sara Stymne and Joakim Nivre, "Non-Projective Parsing with a Static-Dynamic Oracle".]
[]
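The %NP columns of Table 1 in this record measure how non-projective each treebank is. As a rough illustration only (not the script behind the reported numbers; names are hypothetical), the following sketch computes such statistics: an arc h → d is non-projective iff some word strictly between h and d is not dominated by h.

```python
def nonprojective(heads, h, d):
    """True iff a word strictly between h and d is not dominated by h.
    heads[i] is the head of node i; 0 is the artificial root."""
    for w in range(min(h, d) + 1, max(h, d)):
        a = w
        while a not in (0, h):    # climb towards the root
            a = heads[a]
        if a != h:
            return True
    return False

def np_stats(sentences):
    """Percentage of non-projective arcs and sentences; each sentence
    is a dict mapping every dependent to its head."""
    np_arcs = n_arcs = np_sents = 0
    for heads in sentences:
        hit = False
        for d, h in heads.items():
            n_arcs += 1
            if nonprojective(heads, h, d):
                np_arcs += 1
                hit = True
        np_sents += hit
    return 100 * np_arcs / n_arcs, 100 * np_sents / len(sentences)
```

Run over, say, the English and Ancient Greek training sets, this should reproduce the pattern in Table 1: very few non-projective arcs for English and a very high proportion for Ancient Greek.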
GEM-SciDuet-train-46#paper-1069#slide-13
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
We extend the arc-hybrid transition system for dependency parsing with a SWAP transition that enables reordering of the words and construction of non-projective trees. Although this extension potentially breaks the arc-decomposability of the transition system, we show that the existing dynamic oracle can be modified and combined with a static oracle for the SWAP transition. Experiments on five languages with different degrees of non-projectivity show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Non-projective sentences are a notorious problem in dependency parsing.", "Traditional algorithms like those developed by Nivre (2003 Nivre ( , 2004 for transition-based parsing only allow the construction of projective trees.", "These algorithms make use of a stack, a buffer and a set of arcs, and parsing consists of performing a sequence of transitions on these structures.", "Traditional algorithms have been extended in different ways to allow the construction of non-projective trees (Nivre and Nilsson, 2005; Attardi, 2006; Nivre, 2007; Gómez-Rodríguez and Nivre, 2010) .", "One method proposed by Nivre (2009) is based on the idea of word reordering.", "This is achieved by adding a transition that swaps two items in the data structures used, enabling the construction of arbitrary non-projective trees while still only adding arcs between adjacent words (after possible reordering).", "This technique was previously used in the arc-standard transition system (Nivre, 2004) .", "The first contribution of this paper is to show that it can also be combined with the arc-hybrid system proposed by Kuhlmann et al.", "(2011) .", "Recent advances in dependency parsing have demonstrated the benefit of using dynamic oracles for training dependency parsers (Goldberg and Nivre, 2013) .", "Traditionally, parsers were trained in a static way and were only exposed to configurations resulting from optimal transitions during training.", "Dynamic oracles define optimal transition sequences for any configuration in which the parser may be.", "The use of dynamic oracles enables training with exploration of errors, which mitigates the problem of error propagation at prediction time.", "In order to define a dynamic oracle, we need to be able to compute the cost of any transition in any configuration, where cost is usually defined as minimum Hamming loss with respect to the best tree reachable from that configuration.", "Goldberg and Nivre (2013) showed that this computation is straightforward for transition systems that satisfy the property of arc-decomposability, meaning that a tree is reachable from a configuration if and only if every arc in the tree is reachable in itself.", "Based on this result, they defined dynamic oracles for the arc-eager (Nivre, 2003) , arc-hybrid (Kuhlmann et al., 2011) and easy-first (Goldberg and Elhadad, 2010) systems.", "Transition systems that allow non-projective trees are in general not arc-decomposable and therefore require different methods for constructing dynamic oracles (Gómez-Rodríguez and Fernández-González, 2015) .", "The online reordering system of Nivre (2009) is furthermore based on the arc-standard system, which is not even arc-decomposable in itself (Goldberg and Nivre, 2013) .", "The second contribution of this paper is to show that we can take advantage of the arcdecomposability of the arc-hybrid transition system and extend the existing dynamic oracle to deal with the added swap transition.", "The resulting or-acle is static with respect to the new 
transition but remains dynamic for all other transitions.", "We show experimentally that this static-dynamic oracle gives a significant advantage over the alternative static oracle and results in competitive results for non-projective parsing.", "An Extended Transition System The arc-hybrid system has configurations of the form c = (Σ, B, A), where • Σ is a stack (represented as a list with the head to the right), • B is a buffer (represented as a list with the head to the left), • A is a set of dependency arcs (represented as (h, d) pairs).", "1 Given a sentence W = w 1 , .", ".", ".", ", w n , the system is initialized to: c 0 = ([ ], [1, .", ".", ".", ", n, n+1], { }) where n+1 is a special root node, denoted r from now on.", "Terminal configurations have the form: c = ([ ], [r], A) and the parse tree is given by the arc set A.", "There are preconditions such that SHIFT is legal only if b = r, RIGHT only if |Σ| > 1 and LEFT only if |Σ| > 0.", "In order to enforce that r has exactly one dependent, as required by some dependency grammar frameworks, we add a precondition such that LEFT is legal only if |Σ| = 1 or b = r. In the extended system, we add a SWAP transition to be able to construct non-projective trees using online reordering: • SWAP[(σ|s 0 , b|β, A)] = (σ, b|s 0 |β, A) There is a precondition making SWAP legal only if |Σ| > 0, |B| > 1 and s 0 < b.", "3 The SWAP transition reorders nodes by moving the item on top of the stack (s 0 ) to the second position in the buffer, thus inverting the order of s 0 and b.", "The SHIFT and SWAP transitions together implement a simple sorting algorithm, which allows us to permute the order of nodes arbitrarily.", "As shown by (Nivre, 2009) , this implies that we can construct any non-projective tree by reordering and adding arcs between adjacent nodes using LEFT and RIGHT.", "As already observed, the main advantage of the arc-hybrid system over the arc-standard system is that it is arc-decomposable, which allows us to construct a simple and efficient dynamic oracle.", "The arc-eager system (Nivre, 2003) is also arcdecomposable but cannot be combined with SWAP because items on the stack in that system do not necessarily represent disjoint subtrees.", "A Static-Dynamic Oracle The dynamic oracle for arc-hybrid parsing defined by Goldberg and Nivre (2013) computes the cost of a transition by counting the number of gold arcs that are made unreachable by applying that transition.", "This presupposes that the system is arcdecomposable, a result that is proven in the same paper.", "Our construction of an oracle for arc-hybrid parsing with online ordering is based on the conjecture that we can retain arc-decomposition by only making SWAP transitions that are necessary to make non-projective arcs reachable and by enforcing all such transitions.", "Proving this conjecture is, however, outside the scope of this paper.", "Auxiliary Functions and Notation We assume that gold trees are preprocessed at training time to compute the following information for each input node i: • PROJ(i) = the position of node i in the projective order.", "4 • RDEPS(i) = the set of reachable dependents of i, initially all dependents of i.", "• LEFT: C(LEFT) = |RDEPS(s 0 )| + [[h(s 0 ) = b and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• RIGHT: C(RIGHT) = |RDEPS(s 0 )| + [[h(s 0 ) = s 1 and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• SHIFT: We use h(i) to denote the head of a node i 
in the gold tree.", "1.", "If there exists a node i ∈ B −b such that b < i and PROJ(b) > PROJ(i): C(SHIFT) = 0 2.", "Else: C(SHIFT) = |{d ∈ RDEPS(b) | d ∈ Σ}| + [[h(b) ∈ Σ −s 0 and b ∈ RDEPS(h(b))]] Updates: Remove b from RDEPS(h(b)) if h(b) ∈ Σ −s 0 and remove d ∈ Σ from RDEPS(b).", "Static Oracle for SWAP Our oracle is fully dynamic with respect to SHIFT, LEFT and RIGHT but basically static with respect to SWAP.", "This means that only optimal (zero cost) SWAP transitions are allowed during training and that we force the parser to apply the SWAP transition when needed.", "Optimal: To prevent non-optimal SWAP transitions, we add a precondition so that SWAP is legal only if PROJ(s 0 ) > PROJ(b).", "Forced: To force necessary SWAP transitions, we bypass the oracle whenever PROJ(s 0 ) > PROJ(b).", "5 Dynamic Oracle Since we use a static oracle for SWAP transitions, these will always have zero cost.", "The dynamic oracle thus only needs to define costs for the remaining three transitions.", "To construct the oracle, we start from the old dynamic oracle for the projective system and extend it to account for the added flexibility introduced by SWAP.", "Figure 1 defines the transition costs as well as the necessary updates to RDEPS after applying a transition.", "• LEFT: Adding the arc (b, s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from b nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head in s 0 |β and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we instead define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "• RIGHT: Adding the arc (s 1 , s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from s 1 nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we again define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "1 2 3 4 s 1 s 0 b [ 1 2 ] Σ [ 3 4 ] B RIGHT ⇒ 1 2 3 4 [ 1 ] Σ [ 3 4 ] B SHIFT ⇓ 1 2 3 4 [ 1 2 3 ] Σ [ 4 ] B 1 2 4 3 s 1 s 0 b [ 1 2 ] Σ [ 4 3 ] B • SHIFT: In the projective case, shifting b onto the stack means that b will not be able to acquire a head in Σ other than the top item s 0 nor any dependents in Σ.", "With the SWAP transition and a static oracle, we also have to consider the case where b can later be swapped back to the buffer, in which case SHIFT has zero cost.", "We therefore have two cases in Figure 1 .", "In the first case, no updates are needed.", "In the second case, the updates are analogous to the old projective case.", "To illustrate how the oracle works, let us look at some hypothetical configurations.", "First, we can have a situation as in the top left corner of Figure 2 , where all nodes are in projective order given the gold tree displayed above the nodes.", "For simplicity, the nodes are named after their projective order.", "Applying a RIGHT transition in this configuration makes it impossible for s 0 (2) to be attached to its head (3) and therefore makes us lose the arc 3 → 2, as shown in the top right corner.", "If we instead apply a SHIFT transition, we lose the arc between b (3) and its head (1) as well as the arc 3 → 2, as 
shown in the bottom left corner.", "By contrast, a LEFT transition has zero cost, because no arcs are lost so the best tree reachable in the orig-inal configuration is still reachable after applying the LEFT transition.", "Consider now a configuration, like the one in the bottom right corner of Figure 2 , where the nodes are not in projective order.", "Here we can shift b (4) onto the stack without cost, because it will later be swapped back to the buffer to restore the projective order between 4 and 3.", "A RIGHT transition makes us lose the head and dependent of s 0 (4 → 2 and 2 → 3).", "A LEFT transition makes us lose the dependent of s 0 (2 → 3) .", "The combination of a dynamic oracle for LEFT, RIGHT and SHIFT with a static oracle for SWAP allows us to benefit from training with exploration in most situations and at the same time capture nonprojective dependencies.", "Experiments We extend the parser we used in de Lhoneux et al.", "(2017), a greedy transition-based parser that predicts the dependency tree given the raw words of a sentence.", "That parser is itself an extension of the parser developed by Kiperwasser and Goldberg (2016) .", "It relies on a BiLSTM to learn informative features of words in context and a feed-forward network for predicting the next parsing transition.", "It learns vector representations of the words as well as characters.", "Contrary to parsing tradition, it makes no use of part-of-speech tags.", "We released our system as UUparser 2.0, available at https: //github.com/UppsalaNLP/uuparser.", "We first compare our system, which uses our static-dynamic oracle, with the same system using a static oracle.", "This is to find out if we can benefit from error exploration using our partially dynamic oracle.", "We use the same set of hyperparameters as in that paper in all our experiments.", "We additionally compare our method to a different approach to handling non-projectivity, pseudo-projective parsing, as performed in de Lhoneux et al.", "(2017) .", "Pseudo-projective parsing was developed by Nivre and Nilsson (2005) .", "In a pre-processing step, the training data is projectivised: some nodes get reattached to a close parent.", "Parsed data are then 'deprojectivised' in a post-processing step.", "In order for information about non-projectivity to be recoverable after parsing, when projectivising, arcs are renamed to encode information about the original parent of dependents which get re-attached.", "Note that hyperparameters were tweaked for the pseudo-projective system, possibly giving an unfair advantage.", "Lastly, we compare to a projective baseline, using a dynamic oracle but no SWAP transition.", "6 This is to find out the extent to which dealing with non-projectivity is important.", "We selected a sample of 5 treebanks from the Universal Dependencies CoNLL 2017 shared task data .", "We selected languages to have different frequencies of non-projectivity, both at the sentence level and at the level of individual arcs, ranging from a very high frequency (Ancient-Greek) to a low frequency (English), as well as some typological variety.", "Non-projective frequencies were taken from Straka et al.", "(2015) and are included in Table 1 , which report our results on the development sets (best out of 20 epochs).", "Comparing to the static baseline, we can verify that our static-dynamic oracle really preserves the benefits of training with error exploration, with improvements ranging from 0.5 to 3.5 points.", "(All differences here are statistically significant 
with p<0.01, except for Portuguese, where the difference is statistically significant with p<0.05 according to the McNemar test).", "The new system achieves results largely on par with the pseudo-projective parser.", "Our method is better by a small margin for 3 out of 5 languages Table 1 : LAS on dev sets with gold tokenization for our static-dynamic system (S-Dy), the static and projective baselines (Static, Proj) and the pseudo-projective system of de Lhoneux et al.", "(2017) (PProj).", "%NP = percentage of nonprojective arcs/sentences.", "and worse by a large margin for 1.", "Overall, these results are encouraging given that our method is simpler and more efficient to train, with no need for pre-or post-processing and no extension of the dependency label set.", "7 Comparing to the projective baseline, we see that strictly projective parsing can be slightly better than both online reordering and pseudoprojective parsing for a language with few non-projective arcs/sentences like English.", "For all other languages, we see small (Arabic) to big (Ancient Greek) improvements from dealing with non-projectivity in some way.", "Conclusion We have shown how the SWAP transition for online reordering can be integrated into the archybrid transition system for dependency parsing in such a way that we still benefit from training with exploration using a static-dynamic oracle.", "In the future, we intend to test this further by evaluating our model on more languages with proper tuning of hyperparameters.", "We are also interested in the question of whether it is possible to define a fully dynamic oracle for our system and allow exploration for the SWAP transition too." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "An Extended Transition System", "A Static-Dynamic Oracle", "Auxiliary Functions and Notation", "Static Oracle for SWAP", "Dynamic Oracle", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-46#paper-1069#slide-13
Conclusion
We integrated a swap transition into arc-hybrid We defined an oracle that is partially dynamic for this system Our system benefits from error exploration A fully dynamic oracle?
We integrated a swap transition into arc-hybrid We defined an oracle that is partially dynamic for this system Our system benefits from error exploration A fully dynamic oracle?
[]
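The paper_content in the record above defines the arc-hybrid transitions plus the added SWAP, together with their preconditions. Below is a minimal, illustrative Python sketch of those definitions only; it is not part of the dataset and not the authors' released UUparser code. The Config class and function names are ours, and the inequality signs, which the extracted text renders as "=", are read as "≠" following the system's intent (e.g. SHIFT legal only while b is not the root).

```python
# Sketch of the arc-hybrid transition system with SWAP (names are ours).
class Config:
    """c = (Sigma, B, A); the head of the stack is the last list element."""
    def __init__(self, n):
        self.stack = []
        self.buffer = list(range(1, n + 2))  # words 1..n plus artificial root r = n + 1
        self.arcs = set()                    # (head, dependent) pairs
        self.root = n + 1

    def terminal(self):
        # Terminal configurations have the form ([ ], [r], A).
        return not self.stack and self.buffer == [self.root]

def shift(c):
    # SHIFT: move b onto the stack; legal only if b is not the root r.
    assert c.buffer[0] != c.root
    c.stack.append(c.buffer.pop(0))

def left(c):
    # LEFT: add arc (b, s0) and pop s0; attaching to r (b = r) is only
    # allowed when s0 is the last stack item, so r gets exactly one dependent.
    assert c.stack and (len(c.stack) == 1 or c.buffer[0] != c.root)
    c.arcs.add((c.buffer[0], c.stack.pop()))

def right(c):
    # RIGHT: add arc (s1, s0) and pop s0; legal only if |Sigma| > 1.
    assert len(c.stack) > 1
    dep = c.stack.pop()
    c.arcs.add((c.stack[-1], dep))

def swap(c):
    # SWAP[(sigma|s0, b|beta, A)] = (sigma, b|s0|beta, A): s0 moves to the
    # second buffer position; legal only if |Sigma| > 0, |B| > 1 and s0 < b.
    assert c.stack and len(c.buffer) > 1 and c.stack[-1] < c.buffer[0]
    c.buffer.insert(1, c.stack.pop())
```

Interleaving SHIFT and SWAP sorts the nodes, which is why LEFT and RIGHT, which only connect adjacent items, can still build any non-projective tree.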
GEM-SciDuet-train-46#paper-1069#slide-14
1069
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
We extend the arc-hybrid transition system for dependency parsing with a SWAP transition that enables reordering of the words and construction of non-projective trees. Although this extension potentially breaks the arc-decomposability of the transition system, we show that the existing dynamic oracle can be modified and combined with a static oracle for the SWAP transition. Experiments on five languages with different degrees of non-projectivity show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Non-projective sentences are a notorious problem in dependency parsing.", "Traditional algorithms like those developed by Nivre (2003 Nivre ( , 2004 for transition-based parsing only allow the construction of projective trees.", "These algorithms make use of a stack, a buffer and a set of arcs, and parsing consists of performing a sequence of transitions on these structures.", "Traditional algorithms have been extended in different ways to allow the construction of non-projective trees (Nivre and Nilsson, 2005; Attardi, 2006; Nivre, 2007; Gómez-Rodríguez and Nivre, 2010) .", "One method proposed by Nivre (2009) is based on the idea of word reordering.", "This is achieved by adding a transition that swaps two items in the data structures used, enabling the construction of arbitrary non-projective trees while still only adding arcs between adjacent words (after possible reordering).", "This technique was previously used in the arc-standard transition system (Nivre, 2004) .", "The first contribution of this paper is to show that it can also be combined with the arc-hybrid system proposed by Kuhlmann et al.", "(2011) .", "Recent advances in dependency parsing have demonstrated the benefit of using dynamic oracles for training dependency parsers (Goldberg and Nivre, 2013) .", "Traditionally, parsers were trained in a static way and were only exposed to configurations resulting from optimal transitions during training.", "Dynamic oracles define optimal transition sequences for any configuration in which the parser may be.", "The use of dynamic oracles enables training with exploration of errors, which mitigates the problem of error propagation at prediction time.", "In order to define a dynamic oracle, we need to be able to compute the cost of any transition in any configuration, where cost is usually defined as minimum Hamming loss with respect to the best tree reachable from that configuration.", "Goldberg and Nivre (2013) showed that this computation is straightforward for transition systems that satisfy the property of arc-decomposability, meaning that a tree is reachable from a configuration if and only if every arc in the tree is reachable in itself.", "Based on this result, they defined dynamic oracles for the arc-eager (Nivre, 2003) , arc-hybrid (Kuhlmann et al., 2011) and easy-first (Goldberg and Elhadad, 2010) systems.", "Transition systems that allow non-projective trees are in general not arc-decomposable and therefore require different methods for constructing dynamic oracles (Gómez-Rodríguez and Fernández-González, 2015) .", "The online reordering system of Nivre (2009) is furthermore based on the arc-standard system, which is not even arc-decomposable in itself (Goldberg and Nivre, 2013) .", "The second contribution of this paper is to show that we can take advantage of the arcdecomposability of the arc-hybrid transition system and extend the existing dynamic oracle to deal with the added swap transition.", "The resulting or-acle is static with respect to the new 
transition but remains dynamic for all other transitions.", "We show experimentally that this static-dynamic oracle gives a significant advantage over the alternative static oracle and results in competitive results for non-projective parsing.", "An Extended Transition System The arc-hybrid system has configurations of the form c = (Σ, B, A), where • Σ is a stack (represented as a list with the head to the right), • B is a buffer (represented as a list with the head to the left), • A is a set of dependency arcs (represented as (h, d) pairs).", "1 Given a sentence W = w 1 , .", ".", ".", ", w n , the system is initialized to: c 0 = ([ ], [1, .", ".", ".", ", n, n+1], { }) where n+1 is a special root node, denoted r from now on.", "Terminal configurations have the form: c = ([ ], [r], A) and the parse tree is given by the arc set A.", "There are preconditions such that SHIFT is legal only if b = r, RIGHT only if |Σ| > 1 and LEFT only if |Σ| > 0.", "In order to enforce that r has exactly one dependent, as required by some dependency grammar frameworks, we add a precondition such that LEFT is legal only if |Σ| = 1 or b = r. In the extended system, we add a SWAP transition to be able to construct non-projective trees using online reordering: • SWAP[(σ|s 0 , b|β, A)] = (σ, b|s 0 |β, A) There is a precondition making SWAP legal only if |Σ| > 0, |B| > 1 and s 0 < b.", "3 The SWAP transition reorders nodes by moving the item on top of the stack (s 0 ) to the second position in the buffer, thus inverting the order of s 0 and b.", "The SHIFT and SWAP transitions together implement a simple sorting algorithm, which allows us to permute the order of nodes arbitrarily.", "As shown by (Nivre, 2009) , this implies that we can construct any non-projective tree by reordering and adding arcs between adjacent nodes using LEFT and RIGHT.", "As already observed, the main advantage of the arc-hybrid system over the arc-standard system is that it is arc-decomposable, which allows us to construct a simple and efficient dynamic oracle.", "The arc-eager system (Nivre, 2003) is also arcdecomposable but cannot be combined with SWAP because items on the stack in that system do not necessarily represent disjoint subtrees.", "A Static-Dynamic Oracle The dynamic oracle for arc-hybrid parsing defined by Goldberg and Nivre (2013) computes the cost of a transition by counting the number of gold arcs that are made unreachable by applying that transition.", "This presupposes that the system is arcdecomposable, a result that is proven in the same paper.", "Our construction of an oracle for arc-hybrid parsing with online ordering is based on the conjecture that we can retain arc-decomposition by only making SWAP transitions that are necessary to make non-projective arcs reachable and by enforcing all such transitions.", "Proving this conjecture is, however, outside the scope of this paper.", "Auxiliary Functions and Notation We assume that gold trees are preprocessed at training time to compute the following information for each input node i: • PROJ(i) = the position of node i in the projective order.", "4 • RDEPS(i) = the set of reachable dependents of i, initially all dependents of i.", "• LEFT: C(LEFT) = |RDEPS(s 0 )| + [[h(s 0 ) = b and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• RIGHT: C(RIGHT) = |RDEPS(s 0 )| + [[h(s 0 ) = s 1 and s 0 ∈ RDEPS(h(s 0 ))]] Updates: Set RDEPS(s 0 ) = [ ] and remove s 0 from RDEPS(h(s 0 )).", "• SHIFT: We use h(i) to denote the head of a node i 
in the gold tree.", "1.", "If there exists a node i ∈ B −b such that b < i and PROJ(b) > PROJ(i): C(SHIFT) = 0 2.", "Else: C(SHIFT) = |{d ∈ RDEPS(b) | d ∈ Σ}| + [[h(b) ∈ Σ −s 0 and b ∈ RDEPS(h(b))]] Updates: Remove b from RDEPS(h(b)) if h(b) ∈ Σ −s 0 and remove d ∈ Σ from RDEPS(b).", "Static Oracle for SWAP Our oracle is fully dynamic with respect to SHIFT, LEFT and RIGHT but basically static with respect to SWAP.", "This means that only optimal (zero cost) SWAP transitions are allowed during training and that we force the parser to apply the SWAP transition when needed.", "Optimal: To prevent non-optimal SWAP transitions, we add a precondition so that SWAP is legal only if PROJ(s 0 ) > PROJ(b).", "Forced: To force necessary SWAP transitions, we bypass the oracle whenever PROJ(s 0 ) > PROJ(b).", "5 Dynamic Oracle Since we use a static oracle for SWAP transitions, these will always have zero cost.", "The dynamic oracle thus only needs to define costs for the remaining three transitions.", "To construct the oracle, we start from the old dynamic oracle for the projective system and extend it to account for the added flexibility introduced by SWAP.", "Figure 1 defines the transition costs as well as the necessary updates to RDEPS after applying a transition.", "• LEFT: Adding the arc (b, s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from b nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head in s 0 |β and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we instead define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "• RIGHT: Adding the arc (s 1 , s 0 ) and popping s 0 from the stack means that s 0 will not be able to acquire a head different from s 1 nor any of its reachable dependents.", "In the old projective case, the loss was limited to a head and dependents in b|β, but because s 0 can potentially be swapped back to the buffer, we again define reachability explicitly through RDEPS(s 0 ) (for dependents) and RDEPS(h(s 0 )) (for the head) and update these accordingly after applying the transition.", "1 2 3 4 s 1 s 0 b [ 1 2 ] Σ [ 3 4 ] B RIGHT ⇒ 1 2 3 4 [ 1 ] Σ [ 3 4 ] B SHIFT ⇓ 1 2 3 4 [ 1 2 3 ] Σ [ 4 ] B 1 2 4 3 s 1 s 0 b [ 1 2 ] Σ [ 4 3 ] B • SHIFT: In the projective case, shifting b onto the stack means that b will not be able to acquire a head in Σ other than the top item s 0 nor any dependents in Σ.", "With the SWAP transition and a static oracle, we also have to consider the case where b can later be swapped back to the buffer, in which case SHIFT has zero cost.", "We therefore have two cases in Figure 1 .", "In the first case, no updates are needed.", "In the second case, the updates are analogous to the old projective case.", "To illustrate how the oracle works, let us look at some hypothetical configurations.", "First, we can have a situation as in the top left corner of Figure 2 , where all nodes are in projective order given the gold tree displayed above the nodes.", "For simplicity, the nodes are named after their projective order.", "Applying a RIGHT transition in this configuration makes it impossible for s 0 (2) to be attached to its head (3) and therefore makes us lose the arc 3 → 2, as shown in the top right corner.", "If we instead apply a SHIFT transition, we lose the arc between b (3) and its head (1) as well as the arc 3 → 2, as 
shown in the bottom left corner.", "By contrast, a LEFT transition has zero cost, because no arcs are lost so the best tree reachable in the orig-inal configuration is still reachable after applying the LEFT transition.", "Consider now a configuration, like the one in the bottom right corner of Figure 2 , where the nodes are not in projective order.", "Here we can shift b (4) onto the stack without cost, because it will later be swapped back to the buffer to restore the projective order between 4 and 3.", "A RIGHT transition makes us lose the head and dependent of s 0 (4 → 2 and 2 → 3).", "A LEFT transition makes us lose the dependent of s 0 (2 → 3) .", "The combination of a dynamic oracle for LEFT, RIGHT and SHIFT with a static oracle for SWAP allows us to benefit from training with exploration in most situations and at the same time capture nonprojective dependencies.", "Experiments We extend the parser we used in de Lhoneux et al.", "(2017), a greedy transition-based parser that predicts the dependency tree given the raw words of a sentence.", "That parser is itself an extension of the parser developed by Kiperwasser and Goldberg (2016) .", "It relies on a BiLSTM to learn informative features of words in context and a feed-forward network for predicting the next parsing transition.", "It learns vector representations of the words as well as characters.", "Contrary to parsing tradition, it makes no use of part-of-speech tags.", "We released our system as UUparser 2.0, available at https: //github.com/UppsalaNLP/uuparser.", "We first compare our system, which uses our static-dynamic oracle, with the same system using a static oracle.", "This is to find out if we can benefit from error exploration using our partially dynamic oracle.", "We use the same set of hyperparameters as in that paper in all our experiments.", "We additionally compare our method to a different approach to handling non-projectivity, pseudo-projective parsing, as performed in de Lhoneux et al.", "(2017) .", "Pseudo-projective parsing was developed by Nivre and Nilsson (2005) .", "In a pre-processing step, the training data is projectivised: some nodes get reattached to a close parent.", "Parsed data are then 'deprojectivised' in a post-processing step.", "In order for information about non-projectivity to be recoverable after parsing, when projectivising, arcs are renamed to encode information about the original parent of dependents which get re-attached.", "Note that hyperparameters were tweaked for the pseudo-projective system, possibly giving an unfair advantage.", "Lastly, we compare to a projective baseline, using a dynamic oracle but no SWAP transition.", "6 This is to find out the extent to which dealing with non-projectivity is important.", "We selected a sample of 5 treebanks from the Universal Dependencies CoNLL 2017 shared task data .", "We selected languages to have different frequencies of non-projectivity, both at the sentence level and at the level of individual arcs, ranging from a very high frequency (Ancient-Greek) to a low frequency (English), as well as some typological variety.", "Non-projective frequencies were taken from Straka et al.", "(2015) and are included in Table 1 , which report our results on the development sets (best out of 20 epochs).", "Comparing to the static baseline, we can verify that our static-dynamic oracle really preserves the benefits of training with error exploration, with improvements ranging from 0.5 to 3.5 points.", "(All differences here are statistically significant 
with p<0.01, except for Portuguese, where the difference is statistically significant with p<0.05 according to the McNemar test).", "The new system achieves results largely on par with the pseudo-projective parser.", "Our method is better by a small margin for 3 out of 5 languages Table 1 : LAS on dev sets with gold tokenization for our static-dynamic system (S-Dy), the static and projective baselines (Static, Proj) and the pseudo-projective system of de Lhoneux et al.", "(2017) (PProj).", "%NP = percentage of nonprojective arcs/sentences.", "and worse by a large margin for 1.", "Overall, these results are encouraging given that our method is simpler and more efficient to train, with no need for pre-or post-processing and no extension of the dependency label set.", "7 Comparing to the projective baseline, we see that strictly projective parsing can be slightly better than both online reordering and pseudoprojective parsing for a language with few non-projective arcs/sentences like English.", "For all other languages, we see small (Arabic) to big (Ancient Greek) improvements from dealing with non-projectivity in some way.", "Conclusion We have shown how the SWAP transition for online reordering can be integrated into the archybrid transition system for dependency parsing in such a way that we still benefit from training with exploration using a static-dynamic oracle.", "In the future, we intend to test this further by evaluating our model on more languages with proper tuning of hyperparameters.", "We are also interested in the question of whether it is possible to define a fully dynamic oracle for our system and allow exploration for the SWAP transition too." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "4", "5" ], "paper_header_content": [ "Introduction", "An Extended Transition System", "A Static-Dynamic Oracle", "Auxiliary Functions and Notation", "Static Oracle for SWAP", "Dynamic Oracle", "Experiments", "Conclusion" ] }
GEM-SciDuet-train-46#paper-1069#slide-14
Conclusion
We integrated a swap transition into arc-hybrid We defined an oracle that is partially dynamic for this system Our system benefits from error exploration
We integrated a swap transition into arc-hybrid We defined an oracle that is partially dynamic for this system Our system benefits from error exploration
[]
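The record above (a second row for the same paper) repeats the oracle definitions. The sketch below, again ours rather than the authors' code, shows how the static SWAP rule and the dynamic costs for LEFT, RIGHT and SHIFT could be implemented, assuming proj (PROJ), head (the gold head function h) and rdeps (RDEPS, as mutable sets defined for every node including the root) are precomputed from the gold tree as the paper describes. Where the extracted formulas show "h(s0) = b" we read "≠", since the bracketed term counts losing the gold head when it is not the attachment site. The RDEPS updates applied after each transition are omitted here.

```python
# Sketch of the static-dynamic oracle (helper names and shapes are ours).

def swap_forced(c, proj):
    # Static oracle: SWAP is optimal -- and forced during training --
    # exactly when PROJ(s0) > PROJ(b); otherwise it is made illegal.
    return bool(c.stack) and len(c.buffer) > 1 and \
        proj[c.stack[-1]] > proj[c.buffer[0]]

def cost_left(c, head, rdeps):
    # C(LEFT) = |RDEPS(s0)| + [[h(s0) != b and s0 in RDEPS(h(s0))]]
    s0, b = c.stack[-1], c.buffer[0]
    return len(rdeps[s0]) + int(head[s0] != b and s0 in rdeps[head[s0]])

def cost_right(c, head, rdeps):
    # C(RIGHT) = |RDEPS(s0)| + [[h(s0) != s1 and s0 in RDEPS(h(s0))]]
    s0, s1 = c.stack[-1], c.stack[-2]
    return len(rdeps[s0]) + int(head[s0] != s1 and s0 in rdeps[head[s0]])

def cost_shift(c, proj, head, rdeps):
    # Case 1: some later buffer node i has b < i and PROJ(b) > PROJ(i),
    # i.e. b will be swapped back later, so shifting it now costs nothing.
    b = c.buffer[0]  # assumes b != r (SHIFT on the root is illegal)
    if any(b < i and proj[b] > proj[i] for i in c.buffer[1:]):
        return 0
    # Case 2 (as in the projective oracle): dependents of b stranded on the
    # stack, plus b's head if it sits in Sigma below s0.
    return sum(1 for d in rdeps[b] if d in c.stack) + \
        int(head[b] in c.stack[:-1] and b in rdeps[head[b]])
```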
GEM-SciDuet-train-47#paper-1071#slide-0
1071
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205 ], "paper_content_text": [ "Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.", "In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.", "In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.", "In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.", "The work described in this paper emerged from recent efforts at our research centre to reimplement other's work across a number of topics (e.g.", "text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.", "We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.", "The area of Target Dependent Sentiment Analysis (TDSA) and NLP in general has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.", "However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.", "This is best shown by table 1 where many approaches This work is licenced under a Creative Commons Attribution 4.0 International Licence.", "Licence details: http:// creativecommons.org/licenses/by/4.0/ 1 We follow the definitions in Antske Fokkens' guest blog post \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.", "org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/ are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.", "Datasets can vary by domain (e.g.", "product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation 
of methods from these multiple classes.", "Our primary and secondary contributions therefore, are to carry out the first study that reports results across all three different dataset classes, and to release a open source code framework implementing three complementary groups of TDSA methods.", "In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .", "In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .", "Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.", "This can be seen when Chen et al.", "(2017) used the code and embeddings in Tang et al.", "(2016b) they observe different results.", "Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.", "(2016a) they also produce different results to the original authors.", "Our observations within this one sub-field motivates the need to investigate further and understand how such problems can be avoided in the future.", "In some cases, when code has been released, it is difficult to use which could explain why the results were not reproduced.", "Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.", "In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) , NP with dependency parsing (Wang et al., 2017) , and RNN (Tang et al., 2016a) , as well as having been applied largely to different datasets.", "At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.", "Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.", "For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .", "The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .", "Reproducibility and replicability have been researched for sometime in Information Retrieval (IR) since the Grid@CLEF pilot track (Ferro and Harman, 2009 ).", "The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.", "Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another authors results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.", "Fokkens et al.", "(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly 
stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.", "In Twitter sentiment analysis, Sygkounas et al.", "(2016) stated the need for using the same library versions and datasets when replicating work.", "Different methods of releasing datasets and code have been suggested.", "Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline where data can be extracted at each stage therefore facilitating a validation step.", "They stated a mechanism for storing results, dataset and pre-processed data 2 .", "Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016) .", "The act of reproducing or replicating results is not just for validating research but to also show how it can be improved.", "Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.", "Fokkens et al.", "(2013) showed how changes in the five key aspects affected results.", "The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017) which they replicate three different syntactic based aspect extraction methods.", "They found that parameter tuning was very important however using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.", "They found that the methods stated in the original papers are not detailed enough to replicate the study as evidenced by their large results differential.", "Dashtipour et al.", "(2016) undertook a replication study in sentiment prediction, however this was at the document level and on different datasets and languages to the originals.", "In other areas of (aspectbased) sentiment analysis, releasing code for published systems has not been a high priority, e.g.", "in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.", "In IR, specific reproducible research tracks have been created 3 and we are pleased to see the same happening at COLING 2018 4 .", "Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse grained analysis of document level sentiment analysis (Pang et al., 2002; Turney, 2002) .", "Since its inception, papers have applied different methods such as feature based (Kiritchenko et al., 2014) , Recursive Neural Networks (RecNN) (Dong et al., 2014) , Recurrent Neural Networks (RNN) (Tang et al., 2016a) , attention applied to RNN (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017) , Neural Pooling (NP) Wang et al., 2017) , RNN combined with NP (Zhang et al., 2016) , and attention based neural networks (Tang et al., 2016b) .", "Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.", "Mitchell et al.", "(2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF .", "Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, apart from when considering POS and NEG when the joint task performs better.", "Finally, created 
an attention RNN for this task which was evaluated on two very different datasets containing written and spoken (video-based) reviews where the domain adaptation between the two shows some promise.", "Overall, within the field of sentiment analysis there are other granularities such as sentence level (Socher et al., 2013) , topic (Augenstein et al., 2018) , and aspect (Wang et al., 2016; Tay et al., 2017) .", "Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text although this can be seen as a similar task to TDSA.", "However the clear distinction between aspect and TDSA is that TDSA requires the target to be mentioned in the text itself while aspect-level employs a conceptual category with potentially multiple related instantiations in the text.", "Tang et al.", "(2016a) created a Target Dependent LSTM (TDLSTM) which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).", "Adding attention has become very popular recently.", "Tang et al.", "(2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM based methods, however they found that it could not model complex sentences e.g.", "negations.", "Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results as it takes the importance of each word into account with respect to the target.", "Chen et al.", "(2017) also combined a BLSTM and attention, however they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU) which they called Recurrent Attention on Memory (RAM), and they found this method to allow models to better understand more complex sentiment for each comparison.", "used neural pooling features e.g.", "max, min, etc of the word embeddings of the left and right context of the target word, the target itself, and the whole Tweet.", "They inputted the features into a linear SVM, and showed the importance of using the left and right context for the first time.", "They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best alongside using sentiment lexicons to filter the embedding space.", "Other studies have adopted more linguistic approaches.", "Wang et al.", "(2017) extended the work of by using the dependency linked words from the target.", "Dong et al.", "(2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al.", "(2013) but compared to Socher et al.", "(2013) they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).", "Critically, the methods reported above have not been applied to the same datasets, therefore a true comparative evaluation between the different methods is somewhat difficult.", "This has serious implications for generalisability of methods.", "We correct that limitation in our study.", "There are two papers taking a similar approach to our work in terms of generalisability although they do not combine them with the reproduction issues that we highlight.", "First, Chen et al.", "(2017) compared results across Se-mEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.", "They did perform a comparison across different languages, domains, corpora types, and different methods; SVM with features 
(Kiritchenko et al., 2014) , Rec-NN (Dong et al., 2014) , TDLSTM (Tang et al., 2016a) , Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.", "However, the Chinese dataset was not released, and the methods were not compared across all datasets.", "By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.", "A second paper, by Barnes et al.", "(2017) compares seven approaches to (document and sentence level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.", "Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.", "As highlighted above, previous papers tend to only carry out evaluations on one or two datasets which limits the generalisability of their results.", "In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets but it has been noted that some datasets may have issues here.", "For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state their inter-annotator agreement scores nor do they have aspect terms that express neutral opinion.", "We only use a subset of the English datasets available.", "For two reasons.", "First, the time it takes to write parsers and run the models.", "Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).", "From the datasets we have used, we have only had issue with parsing Wang et al.", "(2017) where the annotations for the first set of the data contains the target span but the second set does not.", "Thus making it impossible to use the second set of annotation and forcing us to only use a subset of the dataset.", "An as example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... 
to turn the doctors into bureaucrats#BattleForNumber10\" in that Tweet 'bureaucrats' was annotated as negative but it does not state if it was the first or second instance of 'bureaucrats' since it does not use target spans.", "As we can see from table 2, generally the social media datasets (Twitter and YouTube) contain more targets per sentence with the exception of Dong et al.", "(2014) and Mitchell et al.", "(2013) .", "The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al.", "(2017) Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.", "In all of the experiments below, we lower case all text and tokenise using Twokenizer (Gimpel et al., 2011) .", "This was done as the datasets originate from Twitter and this pre-processing method was to some extent stated in and assumed to be used across the others as they do not explicitly state how they pre-process in the papers.", "Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.", "It takes the word vectors of the left, right, target word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector as input to the Support Vector Machine For each of the experiments below we used the following configurations unless otherwise stated: we performed 5 fold stratified cross validation, features are scaled using Max Min scaling before inputting into the SVM, and used the respective C-Values for the SVM stated in the paper for each of the models.", "One major difficulty with the description of the method in the paper and re-implementation is handling the same target multiple appearances issue as originally raised by Wang et al.", "(2017) .", "As the method requires context with regards to the target word, if there is more than one appearance of the target word then the method does not specify which to use.", "We therefore took the approach of Wang et al.", "(2017) and found all of the features for each appearance and performed median pooling over features.", "This change could explain the subtle differences between the results we report and those of the original paper.", "used three different sentiment lexicons: MPQA 5 (Wilson et al., 2005) , NRC 6 (Mohammad and Turney, 2010) , and HL 7 (Hu and Liu, 2004) .", "We found a small difference in word counts between their reported statistics for the MPQA lexicons and those we performed ourselves, as can be seen in the bold numbers in table 3.", "Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.", "This distinction is not clearly documented in the paper or code.", "However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions is important.", "We ran the same experiment as to show the effectiveness of sentiment lexicons the results can be seen in table 4.", "We can clearly see there are some difference not just with the accuracy scores but the rank of the sentiment lexicons.", "We found just using HL was best and MPQA does help performance compared to the Target-dep baseline which differs to findings.", "Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method 
using HL and using HL & MPQA to show the affect of using the lexicon that both we and found best.", "The original authors tested their methods using three different word vectors: 1.", "Word2Vec trained by on 5 million Tweets containing emoticons (W2V), 2.", "Sentiment Specific Word Embedding (SSWE) from , and 3.", "W2V and SSWE combined.", "Neither of these word embeddings are available from the original authors as never released the embeddings and the link to embeddings no longer works 8 .", "However, the embeddings were released through Wang et al.", "(2017) code base 9 following requesting of the code from .", "Figure 1 shows the results of the different word embeddings across the different methods.", "The main finding we see is that SSWE by themselves are not as informative as W2V vectors which is different to the findings of .", "However we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations.", "Sentiment Lexicons Word Counts Scaling and Final Model comparison We test all of the methods on the test data set of Dong et al.", "(2014) and show the difference between the original and reproduced models in figure 2.", "Finally, we show the effect of scaling using Max Min and not scaling the data.", "As stated before, we have been using Max Min scaling on the NP features, however did not mention scaling in their paper.", "The library they were using, LibLinear (Fan et al., 2008) , suggests in its practical guide (Hsu et al., 2003) to scale each feature to [0, 1] but this was not re-iterated by .", "We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC which is a wrapper of LibLinear, hence making it appropriate to use here.", "As can be seen in figure 2, not scaling can affect the results by around one-third.", "Reproduction of Wang et al.", "(2017) Wang et al.", "(2017) extended the NP work of and instead of using the full tweet/sentence/text contexts they used the full dependency graph of the target word.", "Thus, they created three different methods: 1.", "TDParseuses only the full dependency graph context, 2.", "TDParse the feature of TDParseand the left and right contexts, and 3.", "TDParse+ the features of TDParse and LS and RS contexts.", "The experiments are performed on the Dong et al.", "(2014) and Wang et al.", "(2017) Twitter datasets where we train and test on the previously specified train and test splits.", "We also scale our features using Max Min scaling before inputting into the SVM.", "We used all three sentiment lexicons as in the original paper, and we found the C-Value by performing 5 fold stratified cross validation on the training datasets.", "The results of these experiments can be seen in figure 3 10 .", "As found with the results of replication, scaling is very important but is typically overlooked when reporting.", "8 http://ir.hit.edu.cn/˜dytang/ 9 https://github.com/bluemonk482/tdparse 10 For the Election Twitter dataset TDParse+ result were never reported in the original paper.", "Tang et al.", "(2016a) was the first to use LSTMs specifically for TDSA.", "They created three different models: 1.", "LSTM a standard LSTM that runs over the length of the sentence and takes no target information into account, 2.", "TDLSTM runs two LSTMs, one over the left and the other over the right context of the target word and concatenates the output of the two, and 3.", "TCLSTM same as the TDLSTM method but each input word vector is concatenated with vector of the target word.", "All of the methods outputs are 
fed into a softmax activation function.", "The experiments are performed on the Dong et al.", "(2014) dataset where we train and test on the specified splits.", "For the LSTMs we initialised the weights using uniform distribution U(0.003, 0.003), used Stochastic Gradient Descent (SGD) a learning rate of 0.01, cross entropy loss, padded and truncated sequence to the length of the maximum sequence in the training dataset as stated in the original paper, and we did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a) as we were unsure what this meant.", "With regards to the number of epochs trained, we used early stopping with a patience of 10 and allowed 300 epochs.", "Within their experiments they used SSWE and Glove Twitter vectors 11 (Pennington et al., 2014) .", "As the paper being reproduced does not define the number of epochs they trained for, we use early stopping.", "Thus for early stopping we require to split the training data into train and validation sets to know when to stop.", "As it has been shown by Reimers and Gurevych (2017) that the random seed statistically significantly changes the results of experiments we ran each model over each word embedding thirty times, using a different seed value but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.", "As can be seen in Figure 4 , the initial seed value makes a large difference more so for the smaller embeddings.", "In table 5, we show the difference between our mean and maximum result and the original result for each model using the 200 dimension Glove Twitter vectors.", "Even though the mean result is quite different from the original the maximum is much closer.", "Our results generally agree with their results on the ranking of the word vectors and the embeddings.", "Overall, we were able to reproduce the results of all three papers.", "However for the neural network/deep learning approach of Tang et al.", "(2016a) we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required as the single performance scores can be misleading, which could explain why previous papers obtained different results to the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017) .", "Mass Evaluation For all of the methods we pre-processed the text by lower casing and tokenising using Twokenizer (Gimpel et al., 2011) , and we used all three sentiment lexicons where applicable.", "We found the best word vectors from SSWE and the common crawl 42B 300 dimension Glove vectors by five fold stratified cross validation for the NP methods and the highest accuracy on the validation set for the LSTM methods.", "We chose these word vectors as they have very different sizes (50 and 300), also they have been shown to perform well in different text types; SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017) .", "To make the experiments quicker and computationally less expensive, we filtered out all words from the word vectors that did not appear in the train and test datasets, and this is equivalent with respect to word coverage as using all words.", "Finally we only reported results for the LSTM methods with one seed value and not multiple due to time constraints.", "The results of the methods using the best found word vectors on the test sets can be seen in table 6.", "We find that the TDParse methods generally perform best but only clearly outperforms the 
other non-dependency-parser methods on the YouTuBean dataset.", "We hypothesise that this is because the dataset contains, on average, deeper constituency trees, which can be read as, on average, more complex sentences.", "This may be because it comes from the spoken medium, whereas the rest of the datasets are written.", "We also find that using a sentiment lexicon is almost always beneficial, but only by a small amount.", "Within the LSTM based methods, the TDLSTM method generally performs best, indicating that the extra target information that the TCLSTM method carries is not needed, but we believe this needs further analysis.", "We can conclude that the simpler NP models perform well across domain, type and medium, and that even without language-specific tools and lexicons they are competitive with the more complex LSTM based methods.", "Discussion and conclusion The fast-developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.", "In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.", "While carrying out these reproductions, we have noted and described above the many emerging issues in previous research related to incomplete descriptions of methods and settings, patchy release of code, and lack of comparative evaluations.", "This is natural in a developing field, but it is crucial for ongoing development within NLP in general that improved repeatability practices are adopted.", "The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time).", "We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone; these two key tenets of scientific practice should be brought together.", "In future work, we aim to extend our reproduction framework further and to extend the comparative evaluation to languages other than English.", "This will necessitate changes in the framework, since we expect that dependency parsers and sentiment lexicons will be unavailable for some languages.", "We will also explore, through error analysis, in which situations different neural network architectures perform best." ] }
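To make the training and seed protocol described above concrete, here is a minimal PyTorch sketch — not the authors' released code — of a TDLSTM-style classifier trained under the stated settings (U(-0.003, 0.003) initialisation, SGD at learning rate 0.01, cross entropy loss, early stopping with a patience of 10 over at most 300 epochs) and re-run under thirty seeds on a fixed train/validation split. The class, function and data-shape choices (TDLSTMSketch, run_one_seed, toy_batches, random stand-in batches) are illustrative assumptions.

import torch
import torch.nn as nn

class TDLSTMSketch(nn.Module):
    # Toy stand-in for TDLSTM: one LSTM over the words left of the target,
    # one over the words right of it; final hidden states are concatenated
    # and classified (CrossEntropyLoss applies the softmax implicitly).
    def __init__(self, emb_dim=50, hidden=64, n_classes=3):
        super().__init__()
        self.left = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.right = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_classes)
        for p in self.parameters():          # U(-0.003, 0.003) initialisation
            nn.init.uniform_(p, -0.003, 0.003)

    def forward(self, left_ctx, right_ctx):
        _, (h_l, _) = self.left(left_ctx)
        _, (h_r, _) = self.right(right_ctx)
        return self.out(torch.cat([h_l[-1], h_r[-1]], dim=-1))

def run_one_seed(seed, train, val, max_epochs=300, patience=10):
    torch.manual_seed(seed)                  # the seed under investigation
    model = TDLSTMSketch()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    best, stale = float("inf"), 0
    for _ in range(max_epochs):
        model.train()
        for (l, r), y in train:
            opt.zero_grad()
            loss_fn(model(l, r), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(l, r), y).item() for (l, r), y in val)
        if val_loss < best:                  # early stopping on validation loss
            best, stale = val_loss, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best

def toy_batches(n):                          # random stand-in data: batches of
    return [((torch.randn(4, 5, 50),         # 4 examples, 5 words per context,
              torch.randn(4, 5, 50)),        # 50-dim embeddings, 3 classes
             torch.randint(0, 3, (4,))) for _ in range(n)]

train, val = toy_batches(8), toy_batches(2)  # split fixed across all seeds
scores = [run_one_seed(s, train, val) for s in range(30)]
print(f"val loss over 30 seeds: min {min(scores):.3f}, max {max(scores):.3f}")

Reporting the min/max (or mean and spread) over seeds, rather than a single score, is exactly the practice the paper argues for.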
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.3", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related work", "Datasets used in our experiments", "Reproduction studies", "Reproduction of Vo and Zhang (2015)", "Scaling and Final Model comparison", "Reproduction of Wang et al. (2017)", "Mass Evaluation", "Discussion and conclusion" ] }
GEM-SciDuet-train-47#paper-1071#slide-0
Document Sentiment Example
Rude service, medicore food...there are tons of restaurants in NY...stay away from this one (Pontiki et al., 2015)
Rude service, medicore food...there are tons of restaurants in NY...stay away from this one (Pontiki et al., 2015)
[]
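The vocabulary-filtering step mentioned in the mass evaluation above (dropping all pretrained vectors for words that never occur in the train or test data, which is equivalent in word coverage to loading everything) might look like the following sketch. The function name is ours, and the whitespace-separated Glove text format ("word v1 v2 ...") is assumed.

def filter_embeddings(path, vocab):
    # vocab: set of lowercased tokens seen in the train and test data.
    # Assumes the whitespace-separated Glove text format: "word v1 v2 ...".
    kept = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            word, *values = line.rstrip("\n").split(" ")
            if word in vocab:
                kept[word] = [float(v) for v in values]
    return kept

# e.g. vectors = filter_embeddings("glove.42B.300d.txt", vocab)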
GEM-SciDuet-train-47#paper-1071#slide-1
1071
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205 ], "paper_content_text": [ "Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.", "In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.", "In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.", "In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.", "The work described in this paper emerged from recent efforts at our research centre to reimplement other's work across a number of topics (e.g.", "text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.", "We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.", "The area of Target Dependent Sentiment Analysis (TDSA) and NLP in general has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.", "However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.", "This is best shown by table 1 where many approaches This work is licenced under a Creative Commons Attribution 4.0 International Licence.", "Licence details: http:// creativecommons.org/licenses/by/4.0/ 1 We follow the definitions in Antske Fokkens' guest blog post \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.", "org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/ are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.", "Datasets can vary by domain (e.g.", "product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation 
of methods from these multiple classes.", "Our primary and secondary contributions therefore, are to carry out the first study that reports results across all three different dataset classes, and to release a open source code framework implementing three complementary groups of TDSA methods.", "In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .", "In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .", "Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.", "This can be seen when Chen et al.", "(2017) used the code and embeddings in Tang et al.", "(2016b) they observe different results.", "Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.", "(2016a) they also produce different results to the original authors.", "Our observations within this one sub-field motivates the need to investigate further and understand how such problems can be avoided in the future.", "In some cases, when code has been released, it is difficult to use which could explain why the results were not reproduced.", "Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.", "In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) , NP with dependency parsing (Wang et al., 2017) , and RNN (Tang et al., 2016a) , as well as having been applied largely to different datasets.", "At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.", "Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.", "For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .", "The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .", "Reproducibility and replicability have been researched for sometime in Information Retrieval (IR) since the Grid@CLEF pilot track (Ferro and Harman, 2009 ).", "The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.", "Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another authors results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.", "Fokkens et al.", "(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly 
stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.", "In Twitter sentiment analysis, Sygkounas et al.", "(2016) stated the need for using the same library versions and datasets when replicating work.", "Different methods of releasing datasets and code have been suggested.", "Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline where data can be extracted at each stage therefore facilitating a validation step.", "They stated a mechanism for storing results, dataset and pre-processed data 2 .", "Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016) .", "The act of reproducing or replicating results is not just for validating research but to also show how it can be improved.", "Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.", "Fokkens et al.", "(2013) showed how changes in the five key aspects affected results.", "The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017) which they replicate three different syntactic based aspect extraction methods.", "They found that parameter tuning was very important however using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.", "They found that the methods stated in the original papers are not detailed enough to replicate the study as evidenced by their large results differential.", "Dashtipour et al.", "(2016) undertook a replication study in sentiment prediction, however this was at the document level and on different datasets and languages to the originals.", "In other areas of (aspectbased) sentiment analysis, releasing code for published systems has not been a high priority, e.g.", "in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.", "In IR, specific reproducible research tracks have been created 3 and we are pleased to see the same happening at COLING 2018 4 .", "Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse grained analysis of document level sentiment analysis (Pang et al., 2002; Turney, 2002) .", "Since its inception, papers have applied different methods such as feature based (Kiritchenko et al., 2014) , Recursive Neural Networks (RecNN) (Dong et al., 2014) , Recurrent Neural Networks (RNN) (Tang et al., 2016a) , attention applied to RNN (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017) , Neural Pooling (NP) Wang et al., 2017) , RNN combined with NP (Zhang et al., 2016) , and attention based neural networks (Tang et al., 2016b) .", "Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.", "Mitchell et al.", "(2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF .", "Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, apart from when considering POS and NEG when the joint task performs better.", "Finally, created 
an attention RNN for this task which was evaluated on two very different datasets containing written and spoken (video-based) reviews where the domain adaptation between the two shows some promise.", "Overall, within the field of sentiment analysis there are other granularities such as sentence level (Socher et al., 2013) , topic (Augenstein et al., 2018) , and aspect (Wang et al., 2016; Tay et al., 2017) .", "Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text although this can be seen as a similar task to TDSA.", "However the clear distinction between aspect and TDSA is that TDSA requires the target to be mentioned in the text itself while aspect-level employs a conceptual category with potentially multiple related instantiations in the text.", "Tang et al.", "(2016a) created a Target Dependent LSTM (TDLSTM) which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).", "Adding attention has become very popular recently.", "Tang et al.", "(2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM based methods, however they found that it could not model complex sentences e.g.", "negations.", "Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results as it takes the importance of each word into account with respect to the target.", "Chen et al.", "(2017) also combined a BLSTM and attention, however they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU) which they called Recurrent Attention on Memory (RAM), and they found this method to allow models to better understand more complex sentiment for each comparison.", "used neural pooling features e.g.", "max, min, etc of the word embeddings of the left and right context of the target word, the target itself, and the whole Tweet.", "They inputted the features into a linear SVM, and showed the importance of using the left and right context for the first time.", "They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best alongside using sentiment lexicons to filter the embedding space.", "Other studies have adopted more linguistic approaches.", "Wang et al.", "(2017) extended the work of by using the dependency linked words from the target.", "Dong et al.", "(2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al.", "(2013) but compared to Socher et al.", "(2013) they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).", "Critically, the methods reported above have not been applied to the same datasets, therefore a true comparative evaluation between the different methods is somewhat difficult.", "This has serious implications for generalisability of methods.", "We correct that limitation in our study.", "There are two papers taking a similar approach to our work in terms of generalisability although they do not combine them with the reproduction issues that we highlight.", "First, Chen et al.", "(2017) compared results across Se-mEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.", "They did perform a comparison across different languages, domains, corpora types, and different methods; SVM with features 
(Kiritchenko et al., 2014) , Rec-NN (Dong et al., 2014) , TDLSTM (Tang et al., 2016a) , Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.", "However, the Chinese dataset was not released, and the methods were not compared across all datasets.", "By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.", "A second paper, by Barnes et al.", "(2017) compares seven approaches to (document and sentence level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.", "Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.", "As highlighted above, previous papers tend to only carry out evaluations on one or two datasets which limits the generalisability of their results.", "In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets but it has been noted that some datasets may have issues here.", "For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state their inter-annotator agreement scores nor do they have aspect terms that express neutral opinion.", "We only use a subset of the English datasets available.", "For two reasons.", "First, the time it takes to write parsers and run the models.", "Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).", "From the datasets we have used, we have only had issue with parsing Wang et al.", "(2017) where the annotations for the first set of the data contains the target span but the second set does not.", "Thus making it impossible to use the second set of annotation and forcing us to only use a subset of the dataset.", "An as example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... 
to turn the doctors into bureaucrats#BattleForNumber10\" in that Tweet 'bureaucrats' was annotated as negative but it does not state if it was the first or second instance of 'bureaucrats' since it does not use target spans.", "As we can see from table 2, generally the social media datasets (Twitter and YouTube) contain more targets per sentence with the exception of Dong et al.", "(2014) and Mitchell et al.", "(2013) .", "The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al.", "(2017) Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.", "In all of the experiments below, we lower case all text and tokenise using Twokenizer (Gimpel et al., 2011) .", "This was done as the datasets originate from Twitter and this pre-processing method was to some extent stated in and assumed to be used across the others as they do not explicitly state how they pre-process in the papers.", "Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.", "It takes the word vectors of the left, right, target word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector as input to the Support Vector Machine For each of the experiments below we used the following configurations unless otherwise stated: we performed 5 fold stratified cross validation, features are scaled using Max Min scaling before inputting into the SVM, and used the respective C-Values for the SVM stated in the paper for each of the models.", "One major difficulty with the description of the method in the paper and re-implementation is handling the same target multiple appearances issue as originally raised by Wang et al.", "(2017) .", "As the method requires context with regards to the target word, if there is more than one appearance of the target word then the method does not specify which to use.", "We therefore took the approach of Wang et al.", "(2017) and found all of the features for each appearance and performed median pooling over features.", "This change could explain the subtle differences between the results we report and those of the original paper.", "used three different sentiment lexicons: MPQA 5 (Wilson et al., 2005) , NRC 6 (Mohammad and Turney, 2010) , and HL 7 (Hu and Liu, 2004) .", "We found a small difference in word counts between their reported statistics for the MPQA lexicons and those we performed ourselves, as can be seen in the bold numbers in table 3.", "Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.", "This distinction is not clearly documented in the paper or code.", "However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions is important.", "We ran the same experiment as to show the effectiveness of sentiment lexicons the results can be seen in table 4.", "We can clearly see there are some difference not just with the accuracy scores but the rank of the sentiment lexicons.", "We found just using HL was best and MPQA does help performance compared to the Target-dep baseline which differs to findings.", "Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method 
using HL and using HL & MPQA to show the affect of using the lexicon that both we and found best.", "The original authors tested their methods using three different word vectors: 1.", "Word2Vec trained by on 5 million Tweets containing emoticons (W2V), 2.", "Sentiment Specific Word Embedding (SSWE) from , and 3.", "W2V and SSWE combined.", "Neither of these word embeddings are available from the original authors as never released the embeddings and the link to embeddings no longer works 8 .", "However, the embeddings were released through Wang et al.", "(2017) code base 9 following requesting of the code from .", "Figure 1 shows the results of the different word embeddings across the different methods.", "The main finding we see is that SSWE by themselves are not as informative as W2V vectors which is different to the findings of .", "However we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations.", "Sentiment Lexicons Word Counts Scaling and Final Model comparison We test all of the methods on the test data set of Dong et al.", "(2014) and show the difference between the original and reproduced models in figure 2.", "Finally, we show the effect of scaling using Max Min and not scaling the data.", "As stated before, we have been using Max Min scaling on the NP features, however did not mention scaling in their paper.", "The library they were using, LibLinear (Fan et al., 2008) , suggests in its practical guide (Hsu et al., 2003) to scale each feature to [0, 1] but this was not re-iterated by .", "We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC which is a wrapper of LibLinear, hence making it appropriate to use here.", "As can be seen in figure 2, not scaling can affect the results by around one-third.", "Reproduction of Wang et al.", "(2017) Wang et al.", "(2017) extended the NP work of and instead of using the full tweet/sentence/text contexts they used the full dependency graph of the target word.", "Thus, they created three different methods: 1.", "TDParseuses only the full dependency graph context, 2.", "TDParse the feature of TDParseand the left and right contexts, and 3.", "TDParse+ the features of TDParse and LS and RS contexts.", "The experiments are performed on the Dong et al.", "(2014) and Wang et al.", "(2017) Twitter datasets where we train and test on the previously specified train and test splits.", "We also scale our features using Max Min scaling before inputting into the SVM.", "We used all three sentiment lexicons as in the original paper, and we found the C-Value by performing 5 fold stratified cross validation on the training datasets.", "The results of these experiments can be seen in figure 3 10 .", "As found with the results of replication, scaling is very important but is typically overlooked when reporting.", "8 http://ir.hit.edu.cn/˜dytang/ 9 https://github.com/bluemonk482/tdparse 10 For the Election Twitter dataset TDParse+ result were never reported in the original paper.", "Tang et al.", "(2016a) was the first to use LSTMs specifically for TDSA.", "They created three different models: 1.", "LSTM a standard LSTM that runs over the length of the sentence and takes no target information into account, 2.", "TDLSTM runs two LSTMs, one over the left and the other over the right context of the target word and concatenates the output of the two, and 3.", "TCLSTM same as the TDLSTM method but each input word vector is concatenated with vector of the target word.", "All of the methods outputs are 
fed into a softmax activation function.", "The experiments are performed on the Dong et al.", "(2014) dataset where we train and test on the specified splits.", "For the LSTMs we initialised the weights using uniform distribution U(0.003, 0.003), used Stochastic Gradient Descent (SGD) a learning rate of 0.01, cross entropy loss, padded and truncated sequence to the length of the maximum sequence in the training dataset as stated in the original paper, and we did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a) as we were unsure what this meant.", "With regards to the number of epochs trained, we used early stopping with a patience of 10 and allowed 300 epochs.", "Within their experiments they used SSWE and Glove Twitter vectors 11 (Pennington et al., 2014) .", "As the paper being reproduced does not define the number of epochs they trained for, we use early stopping.", "Thus for early stopping we require to split the training data into train and validation sets to know when to stop.", "As it has been shown by Reimers and Gurevych (2017) that the random seed statistically significantly changes the results of experiments we ran each model over each word embedding thirty times, using a different seed value but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.", "As can be seen in Figure 4 , the initial seed value makes a large difference more so for the smaller embeddings.", "In table 5, we show the difference between our mean and maximum result and the original result for each model using the 200 dimension Glove Twitter vectors.", "Even though the mean result is quite different from the original the maximum is much closer.", "Our results generally agree with their results on the ranking of the word vectors and the embeddings.", "Overall, we were able to reproduce the results of all three papers.", "However for the neural network/deep learning approach of Tang et al.", "(2016a) we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required as the single performance scores can be misleading, which could explain why previous papers obtained different results to the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017) .", "Mass Evaluation For all of the methods we pre-processed the text by lower casing and tokenising using Twokenizer (Gimpel et al., 2011) , and we used all three sentiment lexicons where applicable.", "We found the best word vectors from SSWE and the common crawl 42B 300 dimension Glove vectors by five fold stratified cross validation for the NP methods and the highest accuracy on the validation set for the LSTM methods.", "We chose these word vectors as they have very different sizes (50 and 300), also they have been shown to perform well in different text types; SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017) .", "To make the experiments quicker and computationally less expensive, we filtered out all words from the word vectors that did not appear in the train and test datasets, and this is equivalent with respect to word coverage as using all words.", "Finally we only reported results for the LSTM methods with one seed value and not multiple due to time constraints.", "The results of the methods using the best found word vectors on the test sets can be seen in table 6.", "We find that the TDParse methods generally perform best but only clearly outperforms the 
other nondependency parser methods on the YouTuBean dataset.", "We hypothesise that this is due to the dataset containing, on average, a deeper constituency tree depth which could be seen as on average more complex sentences.", "This could be due to it being from the spoken medium compared to the rest of the datasets which are written.", "Also that using a sentiment lexicon is almost always beneficial, but only by a small amount.", "Within the LSTM based methods the TDLSTM method generally performs the best indicating that the extra target information that the TCLSTM method contains is not needed, but we believe this needs further analysis.", "We can conclude that the simpler NP models perform well across domain, type and medium and that even without language specific tools and lexicons they are competitive to the more complex LSTM based methods.", "Dataset Target-Dep F1 Discussion and conclusion The fast developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.", "In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.", "While carrying out these reproductions, we have noted and described above, the many emerging issues in previous research related to incomplete descriptions of methods and settings, patchy release of code, and lack of comparative evaluations.", "This is natural in a developing field, but it is crucial for ongoing development within NLP in general that improved repeatability practices are adopted.", "The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time) 12 .", "We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone, but these two key tenets of scientific practice should be brought together.", "In future work, we aim to extend our reproduction framework further, and extend the comparative evaluation to languages other than English.", "This will necessitate changes in the framework since we expect that dependency parsers and sentiment lexicons will be unavailable for specific languages.", "Also we will explore through error analysis in which situations different neural network architectures perform best." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.3", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related work", "Datasets used in our experiments", "Reproduction studies", "Reproduction of Vo and Zhang (2015)", "Scaling and Final Model comparison", "Reproduction of Wang et al. (2017)", "Mass Evaluation", "Discussion and conclusion" ] }
GEM-SciDuet-train-47#paper-1071#slide-1
Aspect Based Sentiment Analysis (ABSA) Example
Rude service, medicore food...there are tons of restaurants in NY...stay away from this one (Pontiki et al., 2015)
Rude service, medicore food...there are tons of restaurants in NY...stay away from this one (Pontiki et al., 2015)
[]
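As background for the paper's conclusion that the simpler NP models are competitive, this is a hedged numpy sketch of the neural-pooling feature vector those models build on: max, min, average, standard deviation and product pooling over the word vectors of the left, target, right and full-sentence contexts, concatenated into one feature vector for the SVM. Function names and dimensions are illustrative.

import numpy as np

def pool(context):
    # context: (n_words, emb_dim) matrix of word vectors for one context.
    return np.concatenate([context.max(0), context.min(0), context.mean(0),
                           context.std(0), context.prod(0)])

def np_features(left, target, right):
    full = np.vstack([left, target, right])          # the whole sentence
    return np.concatenate([pool(c) for c in (left, target, right, full)])

rng = np.random.default_rng(0)
feats = np_features(rng.normal(size=(3, 50)),        # words left of target
                    rng.normal(size=(1, 50)),        # the target word
                    rng.normal(size=(4, 50)))        # words right of target
print(feats.shape)   # 4 contexts x 5 pooling ops x 50 dims -> (1000,)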
GEM-SciDuet-train-47#paper-1071#slide-2
1071
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205 ], "paper_content_text": [ "Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.", "In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.", "In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.", "In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.", "The work described in this paper emerged from recent efforts at our research centre to reimplement other's work across a number of topics (e.g.", "text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.", "We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.", "The area of Target Dependent Sentiment Analysis (TDSA) and NLP in general has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.", "However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.", "This is best shown by table 1 where many approaches This work is licenced under a Creative Commons Attribution 4.0 International Licence.", "Licence details: http:// creativecommons.org/licenses/by/4.0/ 1 We follow the definitions in Antske Fokkens' guest blog post \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.", "org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/ are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.", "Datasets can vary by domain (e.g.", "product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation 
of methods from these multiple classes.", "Our primary and secondary contributions therefore, are to carry out the first study that reports results across all three different dataset classes, and to release a open source code framework implementing three complementary groups of TDSA methods.", "In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .", "In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .", "Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.", "This can be seen when Chen et al.", "(2017) used the code and embeddings in Tang et al.", "(2016b) they observe different results.", "Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.", "(2016a) they also produce different results to the original authors.", "Our observations within this one sub-field motivates the need to investigate further and understand how such problems can be avoided in the future.", "In some cases, when code has been released, it is difficult to use which could explain why the results were not reproduced.", "Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.", "In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) , NP with dependency parsing (Wang et al., 2017) , and RNN (Tang et al., 2016a) , as well as having been applied largely to different datasets.", "At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.", "Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.", "For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .", "The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .", "Reproducibility and replicability have been researched for sometime in Information Retrieval (IR) since the Grid@CLEF pilot track (Ferro and Harman, 2009 ).", "The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.", "Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another authors results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.", "Fokkens et al.", "(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly 
stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.", "In Twitter sentiment analysis, Sygkounas et al.", "(2016) stated the need for using the same library versions and datasets when replicating work.", "Different methods of releasing datasets and code have been suggested.", "Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline where data can be extracted at each stage therefore facilitating a validation step.", "They stated a mechanism for storing results, dataset and pre-processed data 2 .", "Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016) .", "The act of reproducing or replicating results is not just for validating research but to also show how it can be improved.", "Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.", "Fokkens et al.", "(2013) showed how changes in the five key aspects affected results.", "The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017) which they replicate three different syntactic based aspect extraction methods.", "They found that parameter tuning was very important however using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.", "They found that the methods stated in the original papers are not detailed enough to replicate the study as evidenced by their large results differential.", "Dashtipour et al.", "(2016) undertook a replication study in sentiment prediction, however this was at the document level and on different datasets and languages to the originals.", "In other areas of (aspectbased) sentiment analysis, releasing code for published systems has not been a high priority, e.g.", "in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.", "In IR, specific reproducible research tracks have been created 3 and we are pleased to see the same happening at COLING 2018 4 .", "Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse grained analysis of document level sentiment analysis (Pang et al., 2002; Turney, 2002) .", "Since its inception, papers have applied different methods such as feature based (Kiritchenko et al., 2014) , Recursive Neural Networks (RecNN) (Dong et al., 2014) , Recurrent Neural Networks (RNN) (Tang et al., 2016a) , attention applied to RNN (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017) , Neural Pooling (NP) Wang et al., 2017) , RNN combined with NP (Zhang et al., 2016) , and attention based neural networks (Tang et al., 2016b) .", "Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.", "Mitchell et al.", "(2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF .", "Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, apart from when considering POS and NEG when the joint task performs better.", "Finally, created 
an attention RNN for this task which was evaluated on two very different datasets containing written and spoken (video-based) reviews where the domain adaptation between the two shows some promise.", "Overall, within the field of sentiment analysis there are other granularities such as sentence level (Socher et al., 2013) , topic (Augenstein et al., 2018) , and aspect (Wang et al., 2016; Tay et al., 2017) .", "Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text although this can be seen as a similar task to TDSA.", "However the clear distinction between aspect and TDSA is that TDSA requires the target to be mentioned in the text itself while aspect-level employs a conceptual category with potentially multiple related instantiations in the text.", "Tang et al.", "(2016a) created a Target Dependent LSTM (TDLSTM) which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).", "Adding attention has become very popular recently.", "Tang et al.", "(2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM based methods, however they found that it could not model complex sentences e.g.", "negations.", "Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results as it takes the importance of each word into account with respect to the target.", "Chen et al.", "(2017) also combined a BLSTM and attention, however they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU) which they called Recurrent Attention on Memory (RAM), and they found this method to allow models to better understand more complex sentiment for each comparison.", "used neural pooling features e.g.", "max, min, etc of the word embeddings of the left and right context of the target word, the target itself, and the whole Tweet.", "They inputted the features into a linear SVM, and showed the importance of using the left and right context for the first time.", "They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best alongside using sentiment lexicons to filter the embedding space.", "Other studies have adopted more linguistic approaches.", "Wang et al.", "(2017) extended the work of by using the dependency linked words from the target.", "Dong et al.", "(2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al.", "(2013) but compared to Socher et al.", "(2013) they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).", "Critically, the methods reported above have not been applied to the same datasets, therefore a true comparative evaluation between the different methods is somewhat difficult.", "This has serious implications for generalisability of methods.", "We correct that limitation in our study.", "There are two papers taking a similar approach to our work in terms of generalisability although they do not combine them with the reproduction issues that we highlight.", "First, Chen et al.", "(2017) compared results across Se-mEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.", "They did perform a comparison across different languages, domains, corpora types, and different methods; SVM with features 
(Kiritchenko et al., 2014) , Rec-NN (Dong et al., 2014) , TDLSTM (Tang et al., 2016a) , Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.", "However, the Chinese dataset was not released, and the methods were not compared across all datasets.", "By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.", "A second paper, by Barnes et al.", "(2017) compares seven approaches to (document and sentence level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.", "Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.", "As highlighted above, previous papers tend to only carry out evaluations on one or two datasets which limits the generalisability of their results.", "In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets but it has been noted that some datasets may have issues here.", "For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state their inter-annotator agreement scores nor do they have aspect terms that express neutral opinion.", "We only use a subset of the English datasets available.", "For two reasons.", "First, the time it takes to write parsers and run the models.", "Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).", "From the datasets we have used, we have only had issue with parsing Wang et al.", "(2017) where the annotations for the first set of the data contains the target span but the second set does not.", "Thus making it impossible to use the second set of annotation and forcing us to only use a subset of the dataset.", "An as example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... 
to turn the doctors into bureaucrats#BattleForNumber10\" in that Tweet 'bureaucrats' was annotated as negative but it does not state if it was the first or second instance of 'bureaucrats' since it does not use target spans.", "As we can see from table 2, generally the social media datasets (Twitter and YouTube) contain more targets per sentence with the exception of Dong et al.", "(2014) and Mitchell et al.", "(2013) .", "The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al.", "(2017) Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.", "In all of the experiments below, we lower case all text and tokenise using Twokenizer (Gimpel et al., 2011) .", "This was done as the datasets originate from Twitter and this pre-processing method was to some extent stated in and assumed to be used across the others as they do not explicitly state how they pre-process in the papers.", "Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.", "It takes the word vectors of the left, right, target word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector as input to the Support Vector Machine For each of the experiments below we used the following configurations unless otherwise stated: we performed 5 fold stratified cross validation, features are scaled using Max Min scaling before inputting into the SVM, and used the respective C-Values for the SVM stated in the paper for each of the models.", "One major difficulty with the description of the method in the paper and re-implementation is handling the same target multiple appearances issue as originally raised by Wang et al.", "(2017) .", "As the method requires context with regards to the target word, if there is more than one appearance of the target word then the method does not specify which to use.", "We therefore took the approach of Wang et al.", "(2017) and found all of the features for each appearance and performed median pooling over features.", "This change could explain the subtle differences between the results we report and those of the original paper.", "used three different sentiment lexicons: MPQA 5 (Wilson et al., 2005) , NRC 6 (Mohammad and Turney, 2010) , and HL 7 (Hu and Liu, 2004) .", "We found a small difference in word counts between their reported statistics for the MPQA lexicons and those we performed ourselves, as can be seen in the bold numbers in table 3.", "Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.", "This distinction is not clearly documented in the paper or code.", "However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions is important.", "We ran the same experiment as to show the effectiveness of sentiment lexicons the results can be seen in table 4.", "We can clearly see there are some difference not just with the accuracy scores but the rank of the sentiment lexicons.", "We found just using HL was best and MPQA does help performance compared to the Target-dep baseline which differs to findings.", "Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method 
using HL and using HL & MPQA to show the affect of using the lexicon that both we and found best.", "The original authors tested their methods using three different word vectors: 1.", "Word2Vec trained by on 5 million Tweets containing emoticons (W2V), 2.", "Sentiment Specific Word Embedding (SSWE) from , and 3.", "W2V and SSWE combined.", "Neither of these word embeddings are available from the original authors as never released the embeddings and the link to embeddings no longer works 8 .", "However, the embeddings were released through Wang et al.", "(2017) code base 9 following requesting of the code from .", "Figure 1 shows the results of the different word embeddings across the different methods.", "The main finding we see is that SSWE by themselves are not as informative as W2V vectors which is different to the findings of .", "However we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations.", "Sentiment Lexicons Word Counts Scaling and Final Model comparison We test all of the methods on the test data set of Dong et al.", "(2014) and show the difference between the original and reproduced models in figure 2.", "Finally, we show the effect of scaling using Max Min and not scaling the data.", "As stated before, we have been using Max Min scaling on the NP features, however did not mention scaling in their paper.", "The library they were using, LibLinear (Fan et al., 2008) , suggests in its practical guide (Hsu et al., 2003) to scale each feature to [0, 1] but this was not re-iterated by .", "We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC which is a wrapper of LibLinear, hence making it appropriate to use here.", "As can be seen in figure 2, not scaling can affect the results by around one-third.", "Reproduction of Wang et al.", "(2017) Wang et al.", "(2017) extended the NP work of and instead of using the full tweet/sentence/text contexts they used the full dependency graph of the target word.", "Thus, they created three different methods: 1.", "TDParseuses only the full dependency graph context, 2.", "TDParse the feature of TDParseand the left and right contexts, and 3.", "TDParse+ the features of TDParse and LS and RS contexts.", "The experiments are performed on the Dong et al.", "(2014) and Wang et al.", "(2017) Twitter datasets where we train and test on the previously specified train and test splits.", "We also scale our features using Max Min scaling before inputting into the SVM.", "We used all three sentiment lexicons as in the original paper, and we found the C-Value by performing 5 fold stratified cross validation on the training datasets.", "The results of these experiments can be seen in figure 3 10 .", "As found with the results of replication, scaling is very important but is typically overlooked when reporting.", "8 http://ir.hit.edu.cn/˜dytang/ 9 https://github.com/bluemonk482/tdparse 10 For the Election Twitter dataset TDParse+ result were never reported in the original paper.", "Tang et al.", "(2016a) was the first to use LSTMs specifically for TDSA.", "They created three different models: 1.", "LSTM a standard LSTM that runs over the length of the sentence and takes no target information into account, 2.", "TDLSTM runs two LSTMs, one over the left and the other over the right context of the target word and concatenates the output of the two, and 3.", "TCLSTM same as the TDLSTM method but each input word vector is concatenated with vector of the target word.", "All of the methods outputs are 
"The experiments are performed on the Dong et al. (2014) dataset, where we train and test on the specified splits.", "For the LSTMs we initialised the weights using a uniform distribution U(-0.003, 0.003), used Stochastic Gradient Descent (SGD) with a learning rate of 0.01 and cross entropy loss, padded and truncated sequences to the length of the maximum sequence in the training dataset as stated in the original paper, and did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a) as we were unsure what this meant.", "With regards to the number of epochs trained, we used early stopping with a patience of 10 and allowed up to 300 epochs.", "Within their experiments they used SSWE and Glove Twitter vectors 11 (Pennington et al., 2014).", "As the paper being reproduced does not state the number of epochs the models were trained for, we use early stopping.", "Thus, for early stopping we need to split the training data into train and validation sets to know when to stop.", "As Reimers and Gurevych (2017) have shown that the random seed statistically significantly changes the results of experiments, we ran each model over each word embedding thirty times, using a different seed value each time but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.", "As can be seen in figure 4, the initial seed value makes a large difference, more so for the smaller embeddings.", "In table 5, we show the difference between our mean and maximum results and the original result for each model using the 200-dimension Glove Twitter vectors.", "Even though the mean result is quite different from the original, the maximum is much closer.", "Our results generally agree with theirs on the ranking of the word vectors and the methods.", "Overall, we were able to reproduce the results of all three papers.", "However, for the neural network/deep learning approach of Tang et al. (2016a) we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required, as single performance scores can be misleading; this could explain why previous papers obtained different results from the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017).", "Mass Evaluation For all of the methods we pre-processed the text by lower-casing and tokenising using Twokenizer (Gimpel et al., 2011), and we used all three sentiment lexicons where applicable.", "We found the best word vectors, from SSWE and the Common Crawl 42B 300-dimension Glove vectors, by five-fold stratified cross validation for the NP methods and by the highest accuracy on the validation set for the LSTM methods.", "We chose these word vectors as they have very different sizes (50 and 300 dimensions), and they have been shown to perform well on different text types: SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017).", "To make the experiments quicker and computationally less expensive, we filtered out of the word vectors all words that did not appear in the train and test datasets, which is equivalent, with respect to word coverage, to using all words.", "Finally, due to time constraints we only report results for the LSTM methods with one seed value rather than multiple.", "The results of the methods using the best found word vectors on the test sets can be seen in table 6.",
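The thirty-seed protocol just described could be sketched as below; train_and_evaluate is a hypothetical stand-in (not from the original papers) for fitting one LSTM variant with patience-10 early stopping on the fixed stratified validation split and returning test accuracy. Reporting the mean and spread alongside the maximum is what shows how misleading a single-seed score can be.

```python
# Hypothetical harness for the seed-variance experiment described above.
import random
import statistics

import numpy as np
import torch

def train_and_evaluate(seed: int) -> float:
    """Stand-in: seed all RNGs, train one model with early stopping on the
    fixed train/validation split, and return test-set accuracy."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    # ... build the model, train with patience-10 early stopping, evaluate ...
    return 0.0  # placeholder accuracy

scores = [train_and_evaluate(seed) for seed in range(30)]
print(f"mean={statistics.mean(scores):.3f} "
      f"stdev={statistics.stdev(scores):.3f} max={max(scores):.3f}")
```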
"We find that the TDParse methods generally perform best, but they only clearly outperform the other non-dependency-parser methods on the YouTuBean dataset.", "We hypothesise that this is due to the dataset containing, on average, deeper constituency trees, which could be seen as, on average, more complex sentences.", "This could be because it comes from the spoken medium, compared to the rest of the datasets, which are written.", "We also find that using a sentiment lexicon is almost always beneficial, but only by a small amount.", "Within the LSTM-based methods, the TDLSTM method generally performs the best, indicating that the extra target information that the TCLSTM method contains is not needed, but we believe this needs further analysis.", "We can conclude that the simpler NP models perform well across domain, type and medium, and that even without language-specific tools and lexicons they are competitive with the more complex LSTM-based methods.", "Discussion and conclusion The fast-developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.", "In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.", "While carrying out these reproductions, we have noted and described above the many emerging issues in previous research related to incomplete descriptions of methods and settings, patchy release of code, and lack of comparative evaluations.", "This is natural in a developing field, but it is crucial for ongoing development within NLP in general that improved repeatability practices are adopted.", "The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time) 12.", "We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone; rather, these two key tenets of scientific practice should be brought together.", "In future work, we aim to extend our reproduction framework further and extend the comparative evaluation to languages other than English.", "This will necessitate changes in the framework, since we expect that dependency parsers and sentiment lexicons will be unavailable for some languages.", "We will also explore, through error analysis, in which situations different neural network architectures perform best." ] }
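As a hedged illustration of the scaling point made in the reproductions above, the following sketch scales precomputed neural-pooling features to [0, 1] with Max Min scaling and fits scikit-learn's LinearSVC, choosing the C-Value by 5-fold stratified cross validation; X and y are random placeholders standing in for the pooled feature vectors and the three sentiment labels.

```python
# Hypothetical sketch of the NP classification pipeline: Max-Min scale the
# pooled features to [0, 1], then tune LinearSVC's C by 5-fold stratified CV.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC

X = np.random.rand(200, 50)        # placeholder NP feature vectors
y = np.random.randint(0, 3, 200)   # placeholder labels: neg/neu/pos

pipeline = Pipeline([
    ("scale", MinMaxScaler(feature_range=(0, 1))),
    ("svm", LinearSVC()),
])
search = GridSearchCV(
    pipeline,
    param_grid={"svm__C": [0.001, 0.01, 0.1, 1.0, 10.0]},
    cv=StratifiedKFold(n_splits=5),
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Putting the scaler inside the Pipeline ensures it is re-fitted on each cross-validation fold, which avoids leaking test-fold statistics into the scaling step.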
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.3", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related work", "Datasets used in our experiments", "Reproduction studies", "Reproduction of Vo and Zhang (2015)", "Scaling and Final Model comparison", "Reproduction of Wang et al. (2017)", "Mass Evaluation", "Discussion and conclusion" ] }
GEM-SciDuet-train-47#paper-1071#slide-2
Target Dependent Sentiment Analysis (TDSA) Example
Rude service, medicore food...there are tons of restaurants in NY...stay away from this one (Pontiki et al., 2015)
Rude service, medicore food...there are tons of restaurants in NY...stay away from this one (Pontiki et al., 2015)
[]
GEM-SciDuet-train-47#paper-1071#slide-3
1071
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205 ], "paper_content_text": [ "Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.", "In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.", "In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.", "In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.", "The work described in this paper emerged from recent efforts at our research centre to reimplement other's work across a number of topics (e.g.", "text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.", "We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.", "The area of Target Dependent Sentiment Analysis (TDSA) and NLP in general has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.", "However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.", "This is best shown by table 1 where many approaches This work is licenced under a Creative Commons Attribution 4.0 International Licence.", "Licence details: http:// creativecommons.org/licenses/by/4.0/ 1 We follow the definitions in Antske Fokkens' guest blog post \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.", "org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/ are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.", "Datasets can vary by domain (e.g.", "product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation 
of methods from these multiple classes.", "Our primary and secondary contributions therefore, are to carry out the first study that reports results across all three different dataset classes, and to release a open source code framework implementing three complementary groups of TDSA methods.", "In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .", "In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .", "Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.", "This can be seen when Chen et al.", "(2017) used the code and embeddings in Tang et al.", "(2016b) they observe different results.", "Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.", "(2016a) they also produce different results to the original authors.", "Our observations within this one sub-field motivates the need to investigate further and understand how such problems can be avoided in the future.", "In some cases, when code has been released, it is difficult to use which could explain why the results were not reproduced.", "Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.", "In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) , NP with dependency parsing (Wang et al., 2017) , and RNN (Tang et al., 2016a) , as well as having been applied largely to different datasets.", "At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.", "Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.", "For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .", "The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .", "Reproducibility and replicability have been researched for sometime in Information Retrieval (IR) since the Grid@CLEF pilot track (Ferro and Harman, 2009 ).", "The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.", "Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another authors results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.", "Fokkens et al.", "(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly 
stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.", "In Twitter sentiment analysis, Sygkounas et al.", "(2016) stated the need for using the same library versions and datasets when replicating work.", "Different methods of releasing datasets and code have been suggested.", "Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline where data can be extracted at each stage therefore facilitating a validation step.", "They stated a mechanism for storing results, dataset and pre-processed data 2 .", "Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016) .", "The act of reproducing or replicating results is not just for validating research but to also show how it can be improved.", "Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.", "Fokkens et al.", "(2013) showed how changes in the five key aspects affected results.", "The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017) which they replicate three different syntactic based aspect extraction methods.", "They found that parameter tuning was very important however using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.", "They found that the methods stated in the original papers are not detailed enough to replicate the study as evidenced by their large results differential.", "Dashtipour et al.", "(2016) undertook a replication study in sentiment prediction, however this was at the document level and on different datasets and languages to the originals.", "In other areas of (aspectbased) sentiment analysis, releasing code for published systems has not been a high priority, e.g.", "in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.", "In IR, specific reproducible research tracks have been created 3 and we are pleased to see the same happening at COLING 2018 4 .", "Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse grained analysis of document level sentiment analysis (Pang et al., 2002; Turney, 2002) .", "Since its inception, papers have applied different methods such as feature based (Kiritchenko et al., 2014) , Recursive Neural Networks (RecNN) (Dong et al., 2014) , Recurrent Neural Networks (RNN) (Tang et al., 2016a) , attention applied to RNN (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017) , Neural Pooling (NP) Wang et al., 2017) , RNN combined with NP (Zhang et al., 2016) , and attention based neural networks (Tang et al., 2016b) .", "Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.", "Mitchell et al.", "(2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF .", "Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, apart from when considering POS and NEG when the joint task performs better.", "Finally, created 
an attention RNN for this task which was evaluated on two very different datasets containing written and spoken (video-based) reviews where the domain adaptation between the two shows some promise.", "Overall, within the field of sentiment analysis there are other granularities such as sentence level (Socher et al., 2013) , topic (Augenstein et al., 2018) , and aspect (Wang et al., 2016; Tay et al., 2017) .", "Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text although this can be seen as a similar task to TDSA.", "However the clear distinction between aspect and TDSA is that TDSA requires the target to be mentioned in the text itself while aspect-level employs a conceptual category with potentially multiple related instantiations in the text.", "Tang et al.", "(2016a) created a Target Dependent LSTM (TDLSTM) which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).", "Adding attention has become very popular recently.", "Tang et al.", "(2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM based methods, however they found that it could not model complex sentences e.g.", "negations.", "Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results as it takes the importance of each word into account with respect to the target.", "Chen et al.", "(2017) also combined a BLSTM and attention, however they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU) which they called Recurrent Attention on Memory (RAM), and they found this method to allow models to better understand more complex sentiment for each comparison.", "used neural pooling features e.g.", "max, min, etc of the word embeddings of the left and right context of the target word, the target itself, and the whole Tweet.", "They inputted the features into a linear SVM, and showed the importance of using the left and right context for the first time.", "They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best alongside using sentiment lexicons to filter the embedding space.", "Other studies have adopted more linguistic approaches.", "Wang et al.", "(2017) extended the work of by using the dependency linked words from the target.", "Dong et al.", "(2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al.", "(2013) but compared to Socher et al.", "(2013) they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).", "Critically, the methods reported above have not been applied to the same datasets, therefore a true comparative evaluation between the different methods is somewhat difficult.", "This has serious implications for generalisability of methods.", "We correct that limitation in our study.", "There are two papers taking a similar approach to our work in terms of generalisability although they do not combine them with the reproduction issues that we highlight.", "First, Chen et al.", "(2017) compared results across Se-mEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.", "They did perform a comparison across different languages, domains, corpora types, and different methods; SVM with features 
(Kiritchenko et al., 2014) , Rec-NN (Dong et al., 2014) , TDLSTM (Tang et al., 2016a) , Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.", "However, the Chinese dataset was not released, and the methods were not compared across all datasets.", "By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.", "A second paper, by Barnes et al.", "(2017) compares seven approaches to (document and sentence level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.", "Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.", "As highlighted above, previous papers tend to only carry out evaluations on one or two datasets which limits the generalisability of their results.", "In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets but it has been noted that some datasets may have issues here.", "For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state their inter-annotator agreement scores nor do they have aspect terms that express neutral opinion.", "We only use a subset of the English datasets available.", "For two reasons.", "First, the time it takes to write parsers and run the models.", "Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).", "From the datasets we have used, we have only had issue with parsing Wang et al.", "(2017) where the annotations for the first set of the data contains the target span but the second set does not.", "Thus making it impossible to use the second set of annotation and forcing us to only use a subset of the dataset.", "An as example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... 
to turn the doctors into bureaucrats#BattleForNumber10\" in that Tweet 'bureaucrats' was annotated as negative but it does not state if it was the first or second instance of 'bureaucrats' since it does not use target spans.", "As we can see from table 2, generally the social media datasets (Twitter and YouTube) contain more targets per sentence with the exception of Dong et al.", "(2014) and Mitchell et al.", "(2013) .", "The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al.", "(2017) Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.", "In all of the experiments below, we lower case all text and tokenise using Twokenizer (Gimpel et al., 2011) .", "This was done as the datasets originate from Twitter and this pre-processing method was to some extent stated in and assumed to be used across the others as they do not explicitly state how they pre-process in the papers.", "Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.", "It takes the word vectors of the left, right, target word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector as input to the Support Vector Machine For each of the experiments below we used the following configurations unless otherwise stated: we performed 5 fold stratified cross validation, features are scaled using Max Min scaling before inputting into the SVM, and used the respective C-Values for the SVM stated in the paper for each of the models.", "One major difficulty with the description of the method in the paper and re-implementation is handling the same target multiple appearances issue as originally raised by Wang et al.", "(2017) .", "As the method requires context with regards to the target word, if there is more than one appearance of the target word then the method does not specify which to use.", "We therefore took the approach of Wang et al.", "(2017) and found all of the features for each appearance and performed median pooling over features.", "This change could explain the subtle differences between the results we report and those of the original paper.", "used three different sentiment lexicons: MPQA 5 (Wilson et al., 2005) , NRC 6 (Mohammad and Turney, 2010) , and HL 7 (Hu and Liu, 2004) .", "We found a small difference in word counts between their reported statistics for the MPQA lexicons and those we performed ourselves, as can be seen in the bold numbers in table 3.", "Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.", "This distinction is not clearly documented in the paper or code.", "However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions is important.", "We ran the same experiment as to show the effectiveness of sentiment lexicons the results can be seen in table 4.", "We can clearly see there are some difference not just with the accuracy scores but the rank of the sentiment lexicons.", "We found just using HL was best and MPQA does help performance compared to the Target-dep baseline which differs to findings.", "Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method 
using HL and using HL & MPQA to show the affect of using the lexicon that both we and found best.", "The original authors tested their methods using three different word vectors: 1.", "Word2Vec trained by on 5 million Tweets containing emoticons (W2V), 2.", "Sentiment Specific Word Embedding (SSWE) from , and 3.", "W2V and SSWE combined.", "Neither of these word embeddings are available from the original authors as never released the embeddings and the link to embeddings no longer works 8 .", "However, the embeddings were released through Wang et al.", "(2017) code base 9 following requesting of the code from .", "Figure 1 shows the results of the different word embeddings across the different methods.", "The main finding we see is that SSWE by themselves are not as informative as W2V vectors which is different to the findings of .", "However we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations.", "Sentiment Lexicons Word Counts Scaling and Final Model comparison We test all of the methods on the test data set of Dong et al.", "(2014) and show the difference between the original and reproduced models in figure 2.", "Finally, we show the effect of scaling using Max Min and not scaling the data.", "As stated before, we have been using Max Min scaling on the NP features, however did not mention scaling in their paper.", "The library they were using, LibLinear (Fan et al., 2008) , suggests in its practical guide (Hsu et al., 2003) to scale each feature to [0, 1] but this was not re-iterated by .", "We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC which is a wrapper of LibLinear, hence making it appropriate to use here.", "As can be seen in figure 2, not scaling can affect the results by around one-third.", "Reproduction of Wang et al.", "(2017) Wang et al.", "(2017) extended the NP work of and instead of using the full tweet/sentence/text contexts they used the full dependency graph of the target word.", "Thus, they created three different methods: 1.", "TDParseuses only the full dependency graph context, 2.", "TDParse the feature of TDParseand the left and right contexts, and 3.", "TDParse+ the features of TDParse and LS and RS contexts.", "The experiments are performed on the Dong et al.", "(2014) and Wang et al.", "(2017) Twitter datasets where we train and test on the previously specified train and test splits.", "We also scale our features using Max Min scaling before inputting into the SVM.", "We used all three sentiment lexicons as in the original paper, and we found the C-Value by performing 5 fold stratified cross validation on the training datasets.", "The results of these experiments can be seen in figure 3 10 .", "As found with the results of replication, scaling is very important but is typically overlooked when reporting.", "8 http://ir.hit.edu.cn/˜dytang/ 9 https://github.com/bluemonk482/tdparse 10 For the Election Twitter dataset TDParse+ result were never reported in the original paper.", "Tang et al.", "(2016a) was the first to use LSTMs specifically for TDSA.", "They created three different models: 1.", "LSTM a standard LSTM that runs over the length of the sentence and takes no target information into account, 2.", "TDLSTM runs two LSTMs, one over the left and the other over the right context of the target word and concatenates the output of the two, and 3.", "TCLSTM same as the TDLSTM method but each input word vector is concatenated with vector of the target word.", "All of the methods outputs are 
fed into a softmax activation function.", "The experiments are performed on the Dong et al.", "(2014) dataset where we train and test on the specified splits.", "For the LSTMs we initialised the weights using uniform distribution U(0.003, 0.003), used Stochastic Gradient Descent (SGD) a learning rate of 0.01, cross entropy loss, padded and truncated sequence to the length of the maximum sequence in the training dataset as stated in the original paper, and we did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a) as we were unsure what this meant.", "With regards to the number of epochs trained, we used early stopping with a patience of 10 and allowed 300 epochs.", "Within their experiments they used SSWE and Glove Twitter vectors 11 (Pennington et al., 2014) .", "As the paper being reproduced does not define the number of epochs they trained for, we use early stopping.", "Thus for early stopping we require to split the training data into train and validation sets to know when to stop.", "As it has been shown by Reimers and Gurevych (2017) that the random seed statistically significantly changes the results of experiments we ran each model over each word embedding thirty times, using a different seed value but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.", "As can be seen in Figure 4 , the initial seed value makes a large difference more so for the smaller embeddings.", "In table 5, we show the difference between our mean and maximum result and the original result for each model using the 200 dimension Glove Twitter vectors.", "Even though the mean result is quite different from the original the maximum is much closer.", "Our results generally agree with their results on the ranking of the word vectors and the embeddings.", "Overall, we were able to reproduce the results of all three papers.", "However for the neural network/deep learning approach of Tang et al.", "(2016a) we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required as the single performance scores can be misleading, which could explain why previous papers obtained different results to the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017) .", "Mass Evaluation For all of the methods we pre-processed the text by lower casing and tokenising using Twokenizer (Gimpel et al., 2011) , and we used all three sentiment lexicons where applicable.", "We found the best word vectors from SSWE and the common crawl 42B 300 dimension Glove vectors by five fold stratified cross validation for the NP methods and the highest accuracy on the validation set for the LSTM methods.", "We chose these word vectors as they have very different sizes (50 and 300), also they have been shown to perform well in different text types; SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017) .", "To make the experiments quicker and computationally less expensive, we filtered out all words from the word vectors that did not appear in the train and test datasets, and this is equivalent with respect to word coverage as using all words.", "Finally we only reported results for the LSTM methods with one seed value and not multiple due to time constraints.", "The results of the methods using the best found word vectors on the test sets can be seen in table 6.", "We find that the TDParse methods generally perform best but only clearly outperforms the 
other nondependency parser methods on the YouTuBean dataset.", "We hypothesise that this is due to the dataset containing, on average, a deeper constituency tree depth which could be seen as on average more complex sentences.", "This could be due to it being from the spoken medium compared to the rest of the datasets which are written.", "Also that using a sentiment lexicon is almost always beneficial, but only by a small amount.", "Within the LSTM based methods the TDLSTM method generally performs the best indicating that the extra target information that the TCLSTM method contains is not needed, but we believe this needs further analysis.", "We can conclude that the simpler NP models perform well across domain, type and medium and that even without language specific tools and lexicons they are competitive to the more complex LSTM based methods.", "Dataset Target-Dep F1 Discussion and conclusion The fast developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.", "In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.", "While carrying out these reproductions, we have noted and described above, the many emerging issues in previous research related to incomplete descriptions of methods and settings, patchy release of code, and lack of comparative evaluations.", "This is natural in a developing field, but it is crucial for ongoing development within NLP in general that improved repeatability practices are adopted.", "The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time) 12 .", "We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone, but these two key tenets of scientific practice should be brought together.", "In future work, we aim to extend our reproduction framework further, and extend the comparative evaluation to languages other than English.", "This will necessitate changes in the framework since we expect that dependency parsers and sentiment lexicons will be unavailable for specific languages.", "Also we will explore through error analysis in which situations different neural network architectures perform best." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.3", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related work", "Datasets used in our experiments", "Reproduction studies", "Reproduction of Vo and Zhang (2015)", "Scaling and Final Model comparison", "Reproduction of Wang et al. (2017)", "Mass Evaluation", "Discussion and conclusion" ] }
GEM-SciDuet-train-47#paper-1071#slide-3
Generalisability
1. Domain Restaurant, Laptop 2. Type Social Media, Reviews 3. Medium Written, Spoken 4. Data Set Size 5. Data Set Characteristics number of targets in a sentence.
1. Domain Restaurant, Laptop 2. Type Social Media, Reviews 3. Medium Written, Spoken 4. Data Set Size 5. Data Set Characteristics number of targets in a sentence.
[]
GEM-SciDuet-train-47#paper-1071#slide-4
1071
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205 ], "paper_content_text": [ "Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.", "In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.", "In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.", "In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.", "The work described in this paper emerged from recent efforts at our research centre to reimplement other's work across a number of topics (e.g.", "text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.", "We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.", "The area of Target Dependent Sentiment Analysis (TDSA) and NLP in general has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.", "However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.", "This is best shown by table 1 where many approaches This work is licenced under a Creative Commons Attribution 4.0 International Licence.", "Licence details: http:// creativecommons.org/licenses/by/4.0/ 1 We follow the definitions in Antske Fokkens' guest blog post \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.", "org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/ are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.", "Datasets can vary by domain (e.g.", "product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation 
of methods from these multiple classes.", "Our primary and secondary contributions therefore, are to carry out the first study that reports results across all three different dataset classes, and to release a open source code framework implementing three complementary groups of TDSA methods.", "In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .", "In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .", "Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.", "This can be seen when Chen et al.", "(2017) used the code and embeddings in Tang et al.", "(2016b) they observe different results.", "Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.", "(2016a) they also produce different results to the original authors.", "Our observations within this one sub-field motivates the need to investigate further and understand how such problems can be avoided in the future.", "In some cases, when code has been released, it is difficult to use which could explain why the results were not reproduced.", "Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.", "In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) , NP with dependency parsing (Wang et al., 2017) , and RNN (Tang et al., 2016a) , as well as having been applied largely to different datasets.", "At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.", "Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.", "For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .", "The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .", "Reproducibility and replicability have been researched for sometime in Information Retrieval (IR) since the Grid@CLEF pilot track (Ferro and Harman, 2009 ).", "The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.", "Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another authors results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.", "Fokkens et al.", "(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly 
stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.", "In Twitter sentiment analysis, Sygkounas et al.", "(2016) stated the need for using the same library versions and datasets when replicating work.", "Different methods of releasing datasets and code have been suggested.", "Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline where data can be extracted at each stage therefore facilitating a validation step.", "They stated a mechanism for storing results, dataset and pre-processed data 2 .", "Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016) .", "The act of reproducing or replicating results is not just for validating research but to also show how it can be improved.", "Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.", "Fokkens et al.", "(2013) showed how changes in the five key aspects affected results.", "The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017) which they replicate three different syntactic based aspect extraction methods.", "They found that parameter tuning was very important however using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.", "They found that the methods stated in the original papers are not detailed enough to replicate the study as evidenced by their large results differential.", "Dashtipour et al.", "(2016) undertook a replication study in sentiment prediction, however this was at the document level and on different datasets and languages to the originals.", "In other areas of (aspectbased) sentiment analysis, releasing code for published systems has not been a high priority, e.g.", "in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.", "In IR, specific reproducible research tracks have been created 3 and we are pleased to see the same happening at COLING 2018 4 .", "Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse grained analysis of document level sentiment analysis (Pang et al., 2002; Turney, 2002) .", "Since its inception, papers have applied different methods such as feature based (Kiritchenko et al., 2014) , Recursive Neural Networks (RecNN) (Dong et al., 2014) , Recurrent Neural Networks (RNN) (Tang et al., 2016a) , attention applied to RNN (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017) , Neural Pooling (NP) Wang et al., 2017) , RNN combined with NP (Zhang et al., 2016) , and attention based neural networks (Tang et al., 2016b) .", "Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.", "Mitchell et al.", "(2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF .", "Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, apart from when considering POS and NEG when the joint task performs better.", "Finally, created 
an attention RNN for this task which was evaluated on two very different datasets containing written and spoken (video-based) reviews where the domain adaptation between the two shows some promise.", "Overall, within the field of sentiment analysis there are other granularities such as sentence level (Socher et al., 2013) , topic (Augenstein et al., 2018) , and aspect (Wang et al., 2016; Tay et al., 2017) .", "Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text although this can be seen as a similar task to TDSA.", "However the clear distinction between aspect and TDSA is that TDSA requires the target to be mentioned in the text itself while aspect-level employs a conceptual category with potentially multiple related instantiations in the text.", "Tang et al.", "(2016a) created a Target Dependent LSTM (TDLSTM) which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).", "Adding attention has become very popular recently.", "Tang et al.", "(2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM based methods, however they found that it could not model complex sentences e.g.", "negations.", "Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results as it takes the importance of each word into account with respect to the target.", "Chen et al.", "(2017) also combined a BLSTM and attention, however they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU) which they called Recurrent Attention on Memory (RAM), and they found this method to allow models to better understand more complex sentiment for each comparison.", "used neural pooling features e.g.", "max, min, etc of the word embeddings of the left and right context of the target word, the target itself, and the whole Tweet.", "They inputted the features into a linear SVM, and showed the importance of using the left and right context for the first time.", "They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best alongside using sentiment lexicons to filter the embedding space.", "Other studies have adopted more linguistic approaches.", "Wang et al.", "(2017) extended the work of by using the dependency linked words from the target.", "Dong et al.", "(2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al.", "(2013) but compared to Socher et al.", "(2013) they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).", "Critically, the methods reported above have not been applied to the same datasets, therefore a true comparative evaluation between the different methods is somewhat difficult.", "This has serious implications for generalisability of methods.", "We correct that limitation in our study.", "There are two papers taking a similar approach to our work in terms of generalisability although they do not combine them with the reproduction issues that we highlight.", "First, Chen et al.", "(2017) compared results across Se-mEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.", "They did perform a comparison across different languages, domains, corpora types, and different methods; SVM with features 
(Kiritchenko et al., 2014) , Rec-NN (Dong et al., 2014) , TDLSTM (Tang et al., 2016a) , Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.", "However, the Chinese dataset was not released, and the methods were not compared across all datasets.", "By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.", "A second paper, by Barnes et al.", "(2017) compares seven approaches to (document and sentence level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.", "Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.", "As highlighted above, previous papers tend to only carry out evaluations on one or two datasets which limits the generalisability of their results.", "In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets but it has been noted that some datasets may have issues here.", "For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state their inter-annotator agreement scores nor do they have aspect terms that express neutral opinion.", "We only use a subset of the English datasets available.", "For two reasons.", "First, the time it takes to write parsers and run the models.", "Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).", "From the datasets we have used, we have only had issue with parsing Wang et al.", "(2017) where the annotations for the first set of the data contains the target span but the second set does not.", "Thus making it impossible to use the second set of annotation and forcing us to only use a subset of the dataset.", "An as example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... 
to turn the doctors into bureaucrats#BattleForNumber10\" in that Tweet 'bureaucrats' was annotated as negative but it does not state if it was the first or second instance of 'bureaucrats' since it does not use target spans.", "As we can see from table 2, generally the social media datasets (Twitter and YouTube) contain more targets per sentence with the exception of Dong et al.", "(2014) and Mitchell et al.", "(2013) .", "The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al.", "(2017) Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.", "In all of the experiments below, we lower case all text and tokenise using Twokenizer (Gimpel et al., 2011) .", "This was done as the datasets originate from Twitter and this pre-processing method was to some extent stated in and assumed to be used across the others as they do not explicitly state how they pre-process in the papers.", "Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.", "It takes the word vectors of the left, right, target word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector as input to the Support Vector Machine For each of the experiments below we used the following configurations unless otherwise stated: we performed 5 fold stratified cross validation, features are scaled using Max Min scaling before inputting into the SVM, and used the respective C-Values for the SVM stated in the paper for each of the models.", "One major difficulty with the description of the method in the paper and re-implementation is handling the same target multiple appearances issue as originally raised by Wang et al.", "(2017) .", "As the method requires context with regards to the target word, if there is more than one appearance of the target word then the method does not specify which to use.", "We therefore took the approach of Wang et al.", "(2017) and found all of the features for each appearance and performed median pooling over features.", "This change could explain the subtle differences between the results we report and those of the original paper.", "used three different sentiment lexicons: MPQA 5 (Wilson et al., 2005) , NRC 6 (Mohammad and Turney, 2010) , and HL 7 (Hu and Liu, 2004) .", "We found a small difference in word counts between their reported statistics for the MPQA lexicons and those we performed ourselves, as can be seen in the bold numbers in table 3.", "Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.", "This distinction is not clearly documented in the paper or code.", "However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions is important.", "We ran the same experiment as to show the effectiveness of sentiment lexicons the results can be seen in table 4.", "We can clearly see there are some difference not just with the accuracy scores but the rank of the sentiment lexicons.", "We found just using HL was best and MPQA does help performance compared to the Target-dep baseline which differs to findings.", "Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method 
"The original authors tested their methods using three different word vectors: 1. Word2Vec embeddings trained by Tang et al. (2014) on 5 million Tweets containing emoticons (W2V), 2. Sentiment Specific Word Embeddings (SSWE) from Tang et al. (2014), and 3. W2V and SSWE combined.", "Neither of these word embeddings is available from the original authors, as the embeddings were never released and the link to them no longer works 8.", "However, the embeddings were released through the Wang et al. (2017) code base 9 following a request for the code.", "Figure 1 shows the results of the different word embeddings across the different methods.", "The main finding we see is that SSWE by themselves are not as informative as W2V vectors, which differs from the original findings.", "However, we agree that combining the two vectors is beneficial, and the ranking of the methods is the same in our observations.", "[Table 3: Sentiment Lexicons Word Counts]", "Scaling and Final Model comparison: We test all of the methods on the test data set of Dong et al. (2014) and show the difference between the original and reproduced models in figure 2.", "Finally, we show the effect of scaling the data using Max Min scaling versus not scaling it.", "As stated before, we have been using Max Min scaling on the NP features; however, the original authors did not mention scaling in their paper.", "The library they were using, LibLinear (Fan et al., 2008), suggests in its practical guide (Hsu et al., 2003) to scale each feature to [0, 1], but this was not re-iterated by the authors.", "We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC, which is a wrapper around LibLinear, hence making it appropriate to use here.", "As can be seen in figure 2, not scaling can affect the results by around one-third.", "Reproduction of Wang et al. (2017): Wang et al. (2017) extended the NP work of Vo and Zhang (2015); instead of using the full tweet/sentence/text contexts, they used the full dependency graph of the target word.", "Thus, they created three different methods: 1. TDParse-, which uses only the full dependency graph context, 2. TDParse, the features of TDParse- plus the left and right contexts, and 3. TDParse+, the features of TDParse plus the LS and RS contexts.", "The experiments are performed on the Dong et al. (2014) and Wang et al. (2017) Twitter datasets, where we train and test on the previously specified train and test splits.", "We also scale our features using Max Min scaling before they are input into the SVM.", "We used all three sentiment lexicons as in the original paper, and we found the C-Value by performing 5 fold stratified cross validation on the training datasets.", "The results of these experiments can be seen in figure 3 10.", "As found with the results of the Vo and Zhang (2015) replication, scaling is very important but is typically overlooked when reporting.", "8 http://ir.hit.edu.cn/~dytang/ 9 https://github.com/bluemonk482/tdparse 10 For the Election Twitter dataset, TDParse+ results were never reported in the original paper.", "Tang et al. (2016a) were the first to use LSTMs specifically for TDSA.", "They created three different models: 1. LSTM, a standard LSTM that runs over the length of the sentence and takes no target information into account, 2. TDLSTM, which runs two LSTMs, one over the left and the other over the right context of the target word, and concatenates the output of the two, and 3. TCLSTM, the same as the TDLSTM method except that each input word vector is concatenated with the vector of the target word.", "The outputs of all of the methods are fed into a softmax activation function.",
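
To make the TDLSTM description above concrete, here is a minimal PyTorch sketch; it is an illustrative reconstruction rather than the authors' code, and the token ids, tensor shapes and hyperparameters are assumptions.

    import torch
    import torch.nn as nn

    class TDLSTM(nn.Module):
        # Two LSTMs meet at the target: one runs left-to-right over the
        # left context plus target, the other over the target plus the
        # reversed right context; their final hidden states are
        # concatenated and projected to the three sentiment classes.
        def __init__(self, vocab_size, emb_dim=50, hidden=50, classes=3):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.left_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
            self.right_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
            self.out = nn.Linear(2 * hidden, classes)

        def forward(self, left_ids, right_ids):
            # right_ids are assumed to be pre-reversed, target first.
            _, (h_left, _) = self.left_lstm(self.emb(left_ids))
            _, (h_right, _) = self.right_lstm(self.emb(right_ids))
            logits = self.out(torch.cat([h_left[-1], h_right[-1]], dim=-1))
            return logits  # train with nn.CrossEntropyLoss (softmax included)

    model = TDLSTM(vocab_size=100)
    left = torch.tensor([[4, 8, 2]])   # assumed ids: left context + target
    right = torch.tensor([[2, 9, 5]])  # assumed ids: target + reversed right
    print(model(left, right).shape)    # torch.Size([1, 3])

The TCLSTM variant would additionally concatenate the (averaged) target word vector onto every input embedding before the two LSTMs.
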
"The experiments are performed on the Dong et al. (2014) dataset, where we train and test on the specified splits.", "For the LSTMs, we initialised the weights using the uniform distribution U(-0.003, 0.003), used Stochastic Gradient Descent (SGD) with a learning rate of 0.01 and a cross entropy loss, padded and truncated sequences to the length of the maximum sequence in the training dataset as stated in the original paper, and did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a), as we were unsure what this meant.", "With regard to the number of epochs trained, we used early stopping with a patience of 10 and allowed a maximum of 300 epochs.", "Within their experiments they used SSWE and Glove Twitter vectors 11 (Pennington et al., 2014).", "As the paper being reproduced does not define the number of epochs they trained for, we use early stopping.", "Thus, for early stopping, we need to split the training data into train and validation sets to know when to stop.", "As Reimers and Gurevych (2017) have shown that the random seed statistically significantly changes the results of experiments, we ran each model over each word embedding thirty times, using a different seed value each time but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.", "As can be seen in Figure 4, the initial seed value makes a large difference, more so for the smaller embeddings.", "In table 5, we show the difference between our mean and maximum result and the original result for each model using the 200 dimension Glove Twitter vectors.", "Even though the mean result is quite different from the original, the maximum is much closer.", "Our results generally agree with their results on the ranking of the word vectors and the embeddings.", "Overall, we were able to reproduce the results of all three papers.", "However, for the neural network/deep learning approach of Tang et al. (2016a), we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required, as single performance scores can be misleading; this could explain why previous papers obtained different results to the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017).", "Mass Evaluation: For all of the methods, we pre-processed the text by lower casing and tokenising using Twokenizer (Gimpel et al., 2011), and we used all three sentiment lexicons where applicable.", "We found the best word vectors, from SSWE and the common crawl 42B 300 dimension Glove vectors, by five fold stratified cross validation for the NP methods and by the highest accuracy on the validation set for the LSTM methods.", "We chose these word vectors as they have very different sizes (50 and 300 dimensions), and they have been shown to perform well on different text types: SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017).", "To make the experiments quicker and computationally less expensive, we filtered out all words from the word vectors that did not appear in the train and test datasets; this is equivalent, with respect to word coverage, to using all words.", "Finally, due to time constraints, we only reported results for the LSTM methods with one seed value rather than multiple.", "The results of the methods using the best found word vectors on the test sets can be seen in table 6.", "We find that the TDParse methods generally perform best, but they only clearly outperform the other non-dependency-parser methods on the YouTuBean dataset.",
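
The seed-sensitivity protocol used for the LSTMs above can be summarised in a short sketch; train_and_evaluate is a hypothetical placeholder standing in for building one of the models, early stopping on the validation set, and returning test accuracy.

    import random
    import numpy as np
    from sklearn.model_selection import train_test_split

    def seed_study(train_X, train_y, test_X, test_y, train_and_evaluate,
                   n_seeds=30):
        # One fixed stratified train/validation split is reused for every
        # run, so only the seed (and hence the initialisation) varies.
        tr_X, val_X, tr_y, val_y = train_test_split(
            train_X, train_y, test_size=0.2, stratify=train_y, random_state=42)
        scores = []
        for seed in range(n_seeds):
            random.seed(seed)
            np.random.seed(seed)
            scores.append(train_and_evaluate(tr_X, tr_y, val_X, val_y,
                                             test_X, test_y, seed=seed))
        # Report both, since the mean and the best run can differ markedly.
        return float(np.mean(scores)), float(np.max(scores))
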
"We hypothesise that this is due to the dataset containing, on average, deeper constituency trees, which could be seen as, on average, more complex sentences.", "This could be due to it being from the spoken medium, compared to the rest of the datasets, which are written.", "We also find that using a sentiment lexicon is almost always beneficial, but only by a small amount.", "Within the LSTM based methods, the TDLSTM method generally performs the best, indicating that the extra target information that the TCLSTM method contains is not needed, but we believe this needs further analysis.", "We can conclude that the simpler NP models perform well across domain, type and medium, and that even without language-specific tools and lexicons they are competitive with the more complex LSTM based methods.", "Discussion and conclusion: The fast developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.", "In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.", "While carrying out these reproductions, we have noted and described above the many emerging issues in previous research related to incomplete descriptions of methods and settings, patchy release of code, and lack of comparative evaluations.", "This is natural in a developing field, but it is crucial for the ongoing development of NLP in general that improved repeatability practices are adopted.", "The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time) 12.", "We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone; these two key tenets of scientific practice should be brought together.", "In future work, we aim to extend our reproduction framework further, and to extend the comparative evaluation to languages other than English.", "This will necessitate changes in the framework, since we expect that dependency parsers and sentiment lexicons will be unavailable for some languages.", "We will also explore, through error analysis, in which situations different neural network architectures perform best." ] }
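
Finally, the embedding filtering mentioned in the mass evaluation amounts to intersecting a pre-trained vector file with the corpus vocabulary; a sketch, under the assumption of a whitespace-separated text-format file such as the released Glove files, is:

    import numpy as np

    def filter_embeddings(path, vocab):
        # Keep only the vectors whose word occurs in the train or test
        # data; word coverage is identical to loading the full file.
        kept = {}
        with open(path, encoding='utf-8') as f:
            for line in f:
                word, *values = line.rstrip().split(' ')
                if word in vocab:
                    kept[word] = np.asarray(values, dtype=np.float32)
        return kept

    # vocab would be the union of all tokens in the train and test splits,
    # e.g. vectors = filter_embeddings('glove.42B.300d.txt', vocab)
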
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.3", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related work", "Datasets used in our experiments", "Reproduction studies", "Reproduction of Vo and Zhang (2015)", "Scaling and Final Model comparison", "Reproduction of Wang et al. (2017)", "Mass Evaluation", "Discussion and conclusion" ] }
GEM-SciDuet-train-47#paper-1071#slide-4
Generalisability within TDSA
Table 1: Methods and Datasets Social Media Reviews News Not Applicable
Table 1: Methods and Datasets Social Media Reviews News Not Applicable
[]
GEM-SciDuet-train-47#paper-1071#slide-5
1071
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205 ], "paper_content_text": [ "Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.", "In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.", "In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.", "In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.", "The work described in this paper emerged from recent efforts at our research centre to reimplement other's work across a number of topics (e.g.", "text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.", "We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.", "The area of Target Dependent Sentiment Analysis (TDSA) and NLP in general has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.", "However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.", "This is best shown by table 1 where many approaches This work is licenced under a Creative Commons Attribution 4.0 International Licence.", "Licence details: http:// creativecommons.org/licenses/by/4.0/ 1 We follow the definitions in Antske Fokkens' guest blog post \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.", "org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/ are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.", "Datasets can vary by domain (e.g.", "product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation 
of methods from these multiple classes.", "Our primary and secondary contributions therefore, are to carry out the first study that reports results across all three different dataset classes, and to release a open source code framework implementing three complementary groups of TDSA methods.", "In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .", "In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .", "Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.", "This can be seen when Chen et al.", "(2017) used the code and embeddings in Tang et al.", "(2016b) they observe different results.", "Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.", "(2016a) they also produce different results to the original authors.", "Our observations within this one sub-field motivates the need to investigate further and understand how such problems can be avoided in the future.", "In some cases, when code has been released, it is difficult to use which could explain why the results were not reproduced.", "Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.", "In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) , NP with dependency parsing (Wang et al., 2017) , and RNN (Tang et al., 2016a) , as well as having been applied largely to different datasets.", "At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.", "Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.", "For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .", "The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .", "Reproducibility and replicability have been researched for sometime in Information Retrieval (IR) since the Grid@CLEF pilot track (Ferro and Harman, 2009 ).", "The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.", "Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another authors results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.", "Fokkens et al.", "(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly 
stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.", "In Twitter sentiment analysis, Sygkounas et al.", "(2016) stated the need for using the same library versions and datasets when replicating work.", "Different methods of releasing datasets and code have been suggested.", "Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline where data can be extracted at each stage therefore facilitating a validation step.", "They stated a mechanism for storing results, dataset and pre-processed data 2 .", "Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016) .", "The act of reproducing or replicating results is not just for validating research but to also show how it can be improved.", "Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.", "Fokkens et al.", "(2013) showed how changes in the five key aspects affected results.", "The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017) which they replicate three different syntactic based aspect extraction methods.", "They found that parameter tuning was very important however using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.", "They found that the methods stated in the original papers are not detailed enough to replicate the study as evidenced by their large results differential.", "Dashtipour et al.", "(2016) undertook a replication study in sentiment prediction, however this was at the document level and on different datasets and languages to the originals.", "In other areas of (aspectbased) sentiment analysis, releasing code for published systems has not been a high priority, e.g.", "in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.", "In IR, specific reproducible research tracks have been created 3 and we are pleased to see the same happening at COLING 2018 4 .", "Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse grained analysis of document level sentiment analysis (Pang et al., 2002; Turney, 2002) .", "Since its inception, papers have applied different methods such as feature based (Kiritchenko et al., 2014) , Recursive Neural Networks (RecNN) (Dong et al., 2014) , Recurrent Neural Networks (RNN) (Tang et al., 2016a) , attention applied to RNN (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017) , Neural Pooling (NP) Wang et al., 2017) , RNN combined with NP (Zhang et al., 2016) , and attention based neural networks (Tang et al., 2016b) .", "Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.", "Mitchell et al.", "(2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF .", "Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, apart from when considering POS and NEG when the joint task performs better.", "Finally, created 
an attention RNN for this task which was evaluated on two very different datasets containing written and spoken (video-based) reviews where the domain adaptation between the two shows some promise.", "Overall, within the field of sentiment analysis there are other granularities such as sentence level (Socher et al., 2013) , topic (Augenstein et al., 2018) , and aspect (Wang et al., 2016; Tay et al., 2017) .", "Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text although this can be seen as a similar task to TDSA.", "However the clear distinction between aspect and TDSA is that TDSA requires the target to be mentioned in the text itself while aspect-level employs a conceptual category with potentially multiple related instantiations in the text.", "Tang et al.", "(2016a) created a Target Dependent LSTM (TDLSTM) which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).", "Adding attention has become very popular recently.", "Tang et al.", "(2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM based methods, however they found that it could not model complex sentences e.g.", "negations.", "Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results as it takes the importance of each word into account with respect to the target.", "Chen et al.", "(2017) also combined a BLSTM and attention, however they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU) which they called Recurrent Attention on Memory (RAM), and they found this method to allow models to better understand more complex sentiment for each comparison.", "used neural pooling features e.g.", "max, min, etc of the word embeddings of the left and right context of the target word, the target itself, and the whole Tweet.", "They inputted the features into a linear SVM, and showed the importance of using the left and right context for the first time.", "They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best alongside using sentiment lexicons to filter the embedding space.", "Other studies have adopted more linguistic approaches.", "Wang et al.", "(2017) extended the work of by using the dependency linked words from the target.", "Dong et al.", "(2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al.", "(2013) but compared to Socher et al.", "(2013) they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).", "Critically, the methods reported above have not been applied to the same datasets, therefore a true comparative evaluation between the different methods is somewhat difficult.", "This has serious implications for generalisability of methods.", "We correct that limitation in our study.", "There are two papers taking a similar approach to our work in terms of generalisability although they do not combine them with the reproduction issues that we highlight.", "First, Chen et al.", "(2017) compared results across Se-mEval's laptop and restaurant reviews in English (Pontiki et al., 2014) , a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.", "They did perform a comparison across different languages, domains, corpora types, and different methods; SVM with features 
(Kiritchenko et al., 2014) , Rec-NN (Dong et al., 2014) , TDLSTM (Tang et al., 2016a) , Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.", "However, the Chinese dataset was not released, and the methods were not compared across all datasets.", "By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.", "A second paper, by Barnes et al.", "(2017) compares seven approaches to (document and sentence level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.", "Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.", "As highlighted above, previous papers tend to only carry out evaluations on one or two datasets which limits the generalisability of their results.", "In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets but it has been noted that some datasets may have issues here.", "For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state their inter-annotator agreement scores nor do they have aspect terms that express neutral opinion.", "We only use a subset of the English datasets available.", "For two reasons.", "First, the time it takes to write parsers and run the models.", "Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).", "From the datasets we have used, we have only had issue with parsing Wang et al.", "(2017) where the annotations for the first set of the data contains the target span but the second set does not.", "Thus making it impossible to use the second set of annotation and forcing us to only use a subset of the dataset.", "An as example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... 
to turn the doctors into bureaucrats#BattleForNumber10\" in that Tweet 'bureaucrats' was annotated as negative but it does not state if it was the first or second instance of 'bureaucrats' since it does not use target spans.", "As we can see from table 2, generally the social media datasets (Twitter and YouTube) contain more targets per sentence with the exception of Dong et al.", "(2014) and Mitchell et al.", "(2013) .", "The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al.", "(2017) Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.", "In all of the experiments below, we lower case all text and tokenise using Twokenizer (Gimpel et al., 2011) .", "This was done as the datasets originate from Twitter and this pre-processing method was to some extent stated in and assumed to be used across the others as they do not explicitly state how they pre-process in the papers.", "Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.", "It takes the word vectors of the left, right, target word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector as input to the Support Vector Machine For each of the experiments below we used the following configurations unless otherwise stated: we performed 5 fold stratified cross validation, features are scaled using Max Min scaling before inputting into the SVM, and used the respective C-Values for the SVM stated in the paper for each of the models.", "One major difficulty with the description of the method in the paper and re-implementation is handling the same target multiple appearances issue as originally raised by Wang et al.", "(2017) .", "As the method requires context with regards to the target word, if there is more than one appearance of the target word then the method does not specify which to use.", "We therefore took the approach of Wang et al.", "(2017) and found all of the features for each appearance and performed median pooling over features.", "This change could explain the subtle differences between the results we report and those of the original paper.", "used three different sentiment lexicons: MPQA 5 (Wilson et al., 2005) , NRC 6 (Mohammad and Turney, 2010) , and HL 7 (Hu and Liu, 2004) .", "We found a small difference in word counts between their reported statistics for the MPQA lexicons and those we performed ourselves, as can be seen in the bold numbers in table 3.", "Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.", "This distinction is not clearly documented in the paper or code.", "However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions is important.", "We ran the same experiment as to show the effectiveness of sentiment lexicons the results can be seen in table 4.", "We can clearly see there are some difference not just with the accuracy scores but the rank of the sentiment lexicons.", "We found just using HL was best and MPQA does help performance compared to the Target-dep baseline which differs to findings.", "Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method 
using HL and using HL & MPQA to show the affect of using the lexicon that both we and found best.", "The original authors tested their methods using three different word vectors: 1.", "Word2Vec trained by on 5 million Tweets containing emoticons (W2V), 2.", "Sentiment Specific Word Embedding (SSWE) from , and 3.", "W2V and SSWE combined.", "Neither of these word embeddings are available from the original authors as never released the embeddings and the link to embeddings no longer works 8 .", "However, the embeddings were released through Wang et al.", "(2017) code base 9 following requesting of the code from .", "Figure 1 shows the results of the different word embeddings across the different methods.", "The main finding we see is that SSWE by themselves are not as informative as W2V vectors which is different to the findings of .", "However we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations.", "Sentiment Lexicons Word Counts Scaling and Final Model comparison We test all of the methods on the test data set of Dong et al.", "(2014) and show the difference between the original and reproduced models in figure 2.", "Finally, we show the effect of scaling using Max Min and not scaling the data.", "As stated before, we have been using Max Min scaling on the NP features, however did not mention scaling in their paper.", "The library they were using, LibLinear (Fan et al., 2008) , suggests in its practical guide (Hsu et al., 2003) to scale each feature to [0, 1] but this was not re-iterated by .", "We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC which is a wrapper of LibLinear, hence making it appropriate to use here.", "As can be seen in figure 2, not scaling can affect the results by around one-third.", "Reproduction of Wang et al.", "(2017) Wang et al.", "(2017) extended the NP work of and instead of using the full tweet/sentence/text contexts they used the full dependency graph of the target word.", "Thus, they created three different methods: 1.", "TDParseuses only the full dependency graph context, 2.", "TDParse the feature of TDParseand the left and right contexts, and 3.", "TDParse+ the features of TDParse and LS and RS contexts.", "The experiments are performed on the Dong et al.", "(2014) and Wang et al.", "(2017) Twitter datasets where we train and test on the previously specified train and test splits.", "We also scale our features using Max Min scaling before inputting into the SVM.", "We used all three sentiment lexicons as in the original paper, and we found the C-Value by performing 5 fold stratified cross validation on the training datasets.", "The results of these experiments can be seen in figure 3 10 .", "As found with the results of replication, scaling is very important but is typically overlooked when reporting.", "8 http://ir.hit.edu.cn/˜dytang/ 9 https://github.com/bluemonk482/tdparse 10 For the Election Twitter dataset TDParse+ result were never reported in the original paper.", "Tang et al.", "(2016a) was the first to use LSTMs specifically for TDSA.", "They created three different models: 1.", "LSTM a standard LSTM that runs over the length of the sentence and takes no target information into account, 2.", "TDLSTM runs two LSTMs, one over the left and the other over the right context of the target word and concatenates the output of the two, and 3.", "TCLSTM same as the TDLSTM method but each input word vector is concatenated with vector of the target word.", "All of the methods outputs are 
fed into a softmax activation function.", "The experiments are performed on the Dong et al.", "(2014) dataset where we train and test on the specified splits.", "For the LSTMs we initialised the weights using uniform distribution U(0.003, 0.003), used Stochastic Gradient Descent (SGD) a learning rate of 0.01, cross entropy loss, padded and truncated sequence to the length of the maximum sequence in the training dataset as stated in the original paper, and we did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a) as we were unsure what this meant.", "With regards to the number of epochs trained, we used early stopping with a patience of 10 and allowed 300 epochs.", "Within their experiments they used SSWE and Glove Twitter vectors 11 (Pennington et al., 2014) .", "As the paper being reproduced does not define the number of epochs they trained for, we use early stopping.", "Thus for early stopping we require to split the training data into train and validation sets to know when to stop.", "As it has been shown by Reimers and Gurevych (2017) that the random seed statistically significantly changes the results of experiments we ran each model over each word embedding thirty times, using a different seed value but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.", "As can be seen in Figure 4 , the initial seed value makes a large difference more so for the smaller embeddings.", "In table 5, we show the difference between our mean and maximum result and the original result for each model using the 200 dimension Glove Twitter vectors.", "Even though the mean result is quite different from the original the maximum is much closer.", "Our results generally agree with their results on the ranking of the word vectors and the embeddings.", "Overall, we were able to reproduce the results of all three papers.", "However for the neural network/deep learning approach of Tang et al.", "(2016a) we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required as the single performance scores can be misleading, which could explain why previous papers obtained different results to the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017) .", "Mass Evaluation For all of the methods we pre-processed the text by lower casing and tokenising using Twokenizer (Gimpel et al., 2011) , and we used all three sentiment lexicons where applicable.", "We found the best word vectors from SSWE and the common crawl 42B 300 dimension Glove vectors by five fold stratified cross validation for the NP methods and the highest accuracy on the validation set for the LSTM methods.", "We chose these word vectors as they have very different sizes (50 and 300), also they have been shown to perform well in different text types; SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017) .", "To make the experiments quicker and computationally less expensive, we filtered out all words from the word vectors that did not appear in the train and test datasets, and this is equivalent with respect to word coverage as using all words.", "Finally we only reported results for the LSTM methods with one seed value and not multiple due to time constraints.", "The results of the methods using the best found word vectors on the test sets can be seen in table 6.", "We find that the TDParse methods generally perform best but only clearly outperforms the 
other nondependency parser methods on the YouTuBean dataset.", "We hypothesise that this is due to the dataset containing, on average, a deeper constituency tree depth which could be seen as on average more complex sentences.", "This could be due to it being from the spoken medium compared to the rest of the datasets which are written.", "Also that using a sentiment lexicon is almost always beneficial, but only by a small amount.", "Within the LSTM based methods the TDLSTM method generally performs the best indicating that the extra target information that the TCLSTM method contains is not needed, but we believe this needs further analysis.", "We can conclude that the simpler NP models perform well across domain, type and medium and that even without language specific tools and lexicons they are competitive to the more complex LSTM based methods.", "Dataset Target-Dep F1 Discussion and conclusion The fast developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.", "In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.", "While carrying out these reproductions, we have noted and described above, the many emerging issues in previous research related to incomplete descriptions of methods and settings, patchy release of code, and lack of comparative evaluations.", "This is natural in a developing field, but it is crucial for ongoing development within NLP in general that improved repeatability practices are adopted.", "The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time) 12 .", "We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone, but these two key tenets of scientific practice should be brought together.", "In future work, we aim to extend our reproduction framework further, and extend the comparative evaluation to languages other than English.", "This will necessitate changes in the framework since we expect that dependency parsers and sentiment lexicons will be unavailable for specific languages.", "Also we will explore through error analysis in which situations different neural network architectures perform best." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.3", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related work", "Datasets used in our experiments", "Reproduction studies", "Reproduction of Vo and Zhang (2015)", "Scaling and Final Model comparison", "Reproduction of Wang et al. (2017)", "Mass Evaluation", "Discussion and conclusion" ] }
GEM-SciDuet-train-47#paper-1071#slide-5
Why Reproduce
Authors Code with paper Original Re-used the same code Re-implemented
Authors Code with paper Original Re-used the same code Re-implemented
[]
GEM-SciDuet-train-47#paper-1071#slide-6
1071
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205 ], "paper_content_text": [ "Introduction Repeatable (replicable and/or reproducible 1 ) experimentation is a core tenet of the scientific endeavour.", "In Natural Language Processing (NLP) research as in other areas, this requires three crucial components: (a) published methods described in sufficient detail (b) a working code base and (c) open dataset(s) to permit training, testing and validation to be reproduced and generalised.", "In the cognate sub-discipline of corpus linguistics, releasing textual datasets has been a defining feature of the community for many years, enabling multiple comparative experiments to be conducted on a stable basis since the core underlying corpora are community resources.", "In NLP, with methods becoming increasingly complex with the use of machine learning and deep learning approaches, it is often difficult to describe all settings and configurations in enough detail without releasing code.", "The work described in this paper emerged from recent efforts at our research centre to reimplement other's work across a number of topics (e.g.", "text reuse, identity resolution and sentiment analysis) where previously published methods were not easily repeatable because of missing or broken code or dependencies, and/or where methods were not sufficiently well described to enable reproduction.", "We focus on one sub-area of sentiment analysis to illustrate the extent of these problems, along with our initial recommendations and contributions to address the issues.", "The area of Target Dependent Sentiment Analysis (TDSA) and NLP in general has been growing rapidly in the last few years due to new neural network methods that require no feature engineering.", "However it is difficult to keep track of the state of the art as new models are tested on different datasets, thus preventing true comparative evaluations.", "This is best shown by table 1 where many approaches This work is licenced under a Creative Commons Attribution 4.0 International Licence.", "Licence details: http:// creativecommons.org/licenses/by/4.0/ 1 We follow the definitions in Antske Fokkens' guest blog post \"replication (obtaining the same results using the same experiment) as well as reproduction (reach the same conclusion through different means)\" from http://coling2018.", "org/slowly-growing-offspring-zigglebottom-anno-2017-guest-post/ are evaluated on the SemEval dataset (Pontiki et al., 2014) but not all.", "Datasets can vary by domain (e.g.", "product), type (social media, review), or medium (written or spoken), and to date there has been no comparative evaluation 
of methods from these multiple classes.", "Our primary and secondary contributions therefore, are to carry out the first study that reports results across all three different dataset classes, and to release a open source code framework implementing three complementary groups of TDSA methods.", "In terms of reproducibility via code release, recent TDSA papers have generally been very good with regards to publishing code alongside their papers (Mitchell et al., 2013; Zhang et al., 2016; Liu and Zhang, 2017; Wang et al., 2017) but other papers have not released code (Wang et al., 2016; Tay et al., 2017) .", "In some cases, the code was initially made available, then removed, and is now back online (Tang et al., 2016a) .", "Unfortunately, in some cases even when code has been published, different results have been obtained relative to the original paper.", "This can be seen when Chen et al.", "(2017) used the code and embeddings in Tang et al.", "(2016b) they observe different results.", "Similarly, when others (Tay et al., 2017; Chen et al., 2017) attempt to replicate the experiments of Tang et al.", "(2016a) they also produce different results to the original authors.", "Our observations within this one sub-field motivates the need to investigate further and understand how such problems can be avoided in the future.", "In some cases, when code has been released, it is difficult to use which could explain why the results were not reproduced.", "Of course, we would not expect researchers to produce industrial strength code, or provide continuing free ongoing support for multiple years after publication, but the situation is clearly problematic for the development of the new field in general.", "In this paper, we therefore reproduce three papers chosen as they employ widely differing methods: Neural Pooling (NP) , NP with dependency parsing (Wang et al., 2017) , and RNN (Tang et al., 2016a) , as well as having been applied largely to different datasets.", "At the end of the paper, we reflect on bringing together elements of repeatability and generalisability which we find are crucial to NLP and data science based disciplines more widely to enable others to make use of the science created.", "Related work Reproducibility and replicability have long been key elements of the scientific method, but have been gaining renewed prominence recently across a number of disciplines with attention being given to a 'reproducibility crisis'.", "For example, in pharmaceutical research, as little as 20-25% of papers were found to be replicable (Prinz et al., 2011) .", "The problem has also been recognised in computer science in general (Collberg and Proebsting, 2016) .", "Reproducibility and replicability have been researched for sometime in Information Retrieval (IR) since the Grid@CLEF pilot track (Ferro and Harman, 2009 ).", "The aim was to create a 'grid of points' where a point defined the performance of a particular IR system using certain pre-processing techniques on a defined dataset.", "Louridas and Gousios (2012) looked at reproducibility in Software Engineering after trying to replicate another authors results and concluded with a list of requirements for papers to be reproducible: (a) All data related to the paper, (b) All code required to reproduce the paper and (c) Documentation for the code and data.", "Fokkens et al.", "(2013) looked at reproducibility in WordNet similarity and Named Entity Recognition finding five key aspects that cause experimental variation and therefore need to be clearly 
stated: (a) pre-processing, (b) experimental setup, (c) versioning, (d) system output, (e) system variation.", "In Twitter sentiment analysis, Sygkounas et al. (2016) stated the need for using the same library versions and datasets when replicating work.", "Different methods of releasing datasets and code have been suggested.", "Ferro and Harman (2009) defined a framework (CIRCO) that enforces a pre-processing pipeline from which data can be extracted at each stage, thereby facilitating a validation step.", "They also described a mechanism for storing results, datasets and pre-processed data.", "Louridas and Gousios (2012) suggested the use of a virtual machine alongside papers to bundle the data and code together, while most state the advantages of releasing source code (Fokkens et al., 2013; Potthast et al., 2016; Sygkounas et al., 2016).", "The act of reproducing or replicating results is not just for validating research but also for showing how it can be improved.", "Ferro and Silvello (2016) followed up their initial research and were able to analyse which pre-processing techniques were important on a French monolingual dataset and how the different techniques affected each other given an IR system.", "Fokkens et al. (2013) showed how changes in the five key aspects affected results.", "The closest related work to our reproducibility study is that of Marrese-Taylor and Matsuo (2017), in which they replicate three different syntax-based aspect extraction methods.", "They found that parameter tuning was very important; however, using different pre-processing pipelines such as Stanford's CoreNLP did not have a consistent effect on the results.", "They also found that the methods stated in the original papers are not detailed enough to replicate the study, as evidenced by their large results differential.", "Dashtipour et al. (2016) undertook a replication study in sentiment prediction; however, this was at the document level and on datasets and languages different from the originals.", "In other areas of (aspect-based) sentiment analysis, releasing code for published systems has not been a high priority; e.g. in SemEval 2016 task 5 (Pontiki et al., 2016) only 1 out of 21 papers released their source code.", "In IR, specific reproducible research tracks have been created, and we are pleased to see the same happening at COLING 2018.", "Turning now to the focus of our investigations, Target Dependent sentiment analysis (TDSA) research (Nasukawa and Yi, 2003) arose as an extension to the coarse-grained analysis of document-level sentiment analysis (Pang et al., 2002; Turney, 2002).", "Since its inception, papers have applied different methods, such as feature-based approaches (Kiritchenko et al., 2014), Recursive Neural Networks (RecNN) (Dong et al., 2014), Recurrent Neural Networks (RNN) (Tang et al., 2016a), attention applied to RNNs (Wang et al., 2016; Chen et al., 2017; Tay et al., 2017), Neural Pooling (NP) (Vo and Zhang, 2015; Wang et al., 2017), RNNs combined with NP (Zhang et al., 2016), and attention-based neural networks (Tang et al., 2016b).", "Others have tackled TDSA as a joint task with target extraction, thus treating it as a sequence labelling problem.", "Mitchell et al. (2013) carried out this task using Conditional Random Fields (CRF), and this work was then extended using a neural CRF.", "Both approaches found that combining the two tasks did not improve results compared to treating the two tasks separately, apart from when considering only POS and NEG, where the joint task performs better.", "Finally, an attention RNN was created for this task and evaluated on two very different datasets containing written and spoken (video-based) reviews, where the domain adaptation between the two shows some promise.", "Overall, within the field of sentiment analysis there are other granularities, such as sentence level (Socher et al., 2013), topic (Augenstein et al., 2018), and aspect (Wang et al., 2016; Tay et al., 2017).", "Aspect-level sentiment analysis relates to identifying the sentiment of (potentially multiple) topics in the same text, although this can be seen as a similar task to TDSA.", "However, the clear distinction between aspect-level analysis and TDSA is that TDSA requires the target to be mentioned in the text itself, while the aspect level employs a conceptual category with potentially multiple related instantiations in the text.", "Tang et al. (2016a) created a Target Dependent LSTM (TDLSTM), which encompassed two LSTMs either side of the target word, then improved the model by concatenating the target vector to the input embeddings to create a Target Connected LSTM (TCLSTM).", "Adding attention has become very popular recently.", "Tang et al. (2016b) showed the speed and accuracy improvements of using multiple attention layers only over LSTM-based methods; however, they found that this could not model complex sentences, e.g. negations.", "Liu and Zhang (2017) showed that adding attention to a Bi-directional LSTM (BLSTM) improves the results, as it takes the importance of each word into account with respect to the target.", "Chen et al. (2017) also combined a BLSTM and attention; however, they used multiple attention layers and combined the results using a Gated Recurrent Unit (GRU), which they called Recurrent Attention on Memory (RAM), and they found this method allows models to better understand more complex sentiment.", "Vo and Zhang (2015) used neural pooling features, e.g. max, min, etc., of the word embeddings of the left and right contexts of the target word, the target itself, and the whole Tweet.", "They input the features into a linear SVM, and showed the importance of using the left and right contexts for the first time.", "They found in their study that using a combination of Word2Vec embeddings and sentiment embeddings performed best, alongside using sentiment lexicons to filter the embedding space.", "Other studies have adopted more linguistic approaches.", "Wang et al. (2017) extended the work of Vo and Zhang (2015) by using the words dependency-linked to the target.", "Dong et al. (2014) used the dependency tree to create a Recursive Neural Network (RecNN) inspired by Socher et al. (2013); compared to Socher et al. (2013), they also utilised the dependency tags to create an Adaptive RecNN (ARecNN).", "Critically, the methods reported above have not been applied to the same datasets; therefore a true comparative evaluation between the different methods is somewhat difficult.", "This has serious implications for the generalisability of methods.", "We correct that limitation in our study.", "There are two papers taking a similar approach to our work in terms of generalisability, although they do not combine it with the reproduction issues that we highlight.", "First, Chen et al. (2017) compared results across SemEval's laptop and restaurant reviews in English (Pontiki et al., 2014), a Twitter dataset (Dong et al., 2014) and their own Chinese news comments dataset.", "They did perform a comparison across different languages, domains, corpora types, and different methods: SVM with features (Kiritchenko et al., 2014), RecNN (Dong et al., 2014), TDLSTM (Tang et al., 2016a), Memory Neural Network (MNet) (Tang et al., 2016b) and their own attention method.", "However, the Chinese dataset was not released, and the methods were not compared across all datasets.", "By contrast, we compare all methods across all datasets, using techniques that are not just from the Recurrent Neural Network (RNN) family.", "A second paper, by Barnes et al. (2017), compares seven approaches to (document- and sentence-level) sentiment analysis on six benchmark datasets, but does not systematically explore reproduction issues as we do in our paper.", "Datasets used in our experiments We are evaluating our models over six different English datasets deliberately chosen to represent a range of domains, types and mediums.", "As highlighted above, previous papers tend to only carry out evaluations on one or two datasets, which limits the generalisability of their results.", "In this paper, we do not consider the quality or inter-annotator agreement levels of these datasets, but it has been noted that some datasets may have issues here.", "For example, Pavlopoulos and Androutsopoulos (2014) point out that the Hu and Liu (2004) dataset does not state its inter-annotator agreement scores, nor does it have aspect terms that express neutral opinion.", "We only use a subset of the English datasets available, for two reasons.", "First, the time it takes to write parsers and run the models.", "Second, we only used datasets that contain three distinct sentiments (Wilson (2008) only has two).", "From the datasets we have used, we have only had an issue with parsing Wang et al. (2017), where the annotations for the first set of the data contain the target span but the second set does not.", "This makes it impossible to use the second set of annotations, forcing us to use only a subset of the dataset.",
"As an example of this: \"Got rid of bureaucrats 'and we put that money, into 9000 more doctors and nurses'... to turn the doctors into bureaucrats#BattleForNumber10\"; in that Tweet, 'bureaucrats' was annotated as negative, but the annotation does not state whether it refers to the first or second instance of 'bureaucrats', since it does not use target spans.", "As we can see from table 2, the social media datasets (Twitter and YouTube) generally contain more targets per sentence, with the exceptions of Dong et al. (2014) and Mitchell et al. (2013).", "The only dataset that has a small difference between the number of unique sentiments per sentence is the Wang et al. (2017) dataset.", "Reproduction studies In the following subsections, we present the three different methods that we are reproducing and how their results differ from the original analysis.", "In all of the experiments below, we lower-case all text and tokenise using Twokenizer (Gimpel et al., 2011).", "This was done as the datasets originate from Twitter; this pre-processing method was to some extent stated in one of the original papers and assumed to be used across the others, as they do not explicitly state how they pre-process.", "Reproduction of Vo and Zhang (2015) Vo and Zhang (2015) created the first NP method for TDSA.", "It takes the word vectors of the left, right, target-word, and full tweet/sentence/text contexts and performs max, min, average, standard deviation, and product pooling over these contexts to create a feature vector that is input into a Support Vector Machine (SVM).", "For each of the experiments below we used the following configuration unless otherwise stated: we performed 5-fold stratified cross validation, features were scaled using Max Min scaling before being input into the SVM, and we used the respective C-values for the SVM stated in the paper for each of the models.", "One major difficulty with the description of the method in the paper, and with re-implementing it, is handling the issue of the same target appearing multiple times in a sentence, as originally raised by Wang et al. (2017).", "As the method requires context with regard to the target word, if there is more than one appearance of the target word then the method does not specify which appearance to use.", "We therefore took the approach of Wang et al. (2017): we computed the features for each appearance and performed median pooling over them.", "This change could explain the subtle differences between the results we report and those of the original paper.",
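To make the pooling pipeline just described concrete, here is a minimal, illustrative sketch of the feature extraction and classifier. It is our reconstruction under stated assumptions rather than code from any of the reproduced systems: the `embed` lookup function, the `DIM` constant and the C value are placeholders, and the median pooling over repeated target appearances follows the Wang et al. (2017) approach described above.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC

DIM = 50  # word vector dimensionality (e.g. SSWE vectors are 50-dimensional)

def pool(vectors):
    """Concatenate max, min, average, standard deviation and product
    pooling over one context's (num_tokens, DIM) word vector matrix."""
    if vectors.shape[0] == 0:
        # An empty context (e.g. a sentence-initial target has no left
        # context) is represented here by a single zero vector.
        vectors = np.zeros((1, DIM))
    return np.concatenate([vectors.max(axis=0), vectors.min(axis=0),
                           vectors.mean(axis=0), vectors.std(axis=0),
                           vectors.prod(axis=0)])

def appearance_features(tokens, span, embed):
    """Pooled features for the left, target, right and full-sentence
    contexts of one target appearance; span is (start, end) token indices.
    `embed(tokens)` is assumed to return a (num_tokens, DIM) numpy array."""
    start, end = span
    contexts = [tokens[:start], tokens[start:end], tokens[end:], tokens]
    return np.concatenate([pool(embed(c)) for c in contexts])

def target_features(tokens, spans, embed):
    """Median-pool the features over every appearance of the same target."""
    return np.median([appearance_features(tokens, s, embed) for s in spans],
                     axis=0)

# Max Min scaling before the linear SVM; the scaling comparison reported
# below shows why this step matters. The C value here is illustrative only.
classifier = make_pipeline(MinMaxScaler(), LinearSVC(C=0.01))
```

Used this way, `classifier.fit` would receive one pooled feature vector per (sentence, target) pair, matching the experimental setup described above.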
"Vo and Zhang (2015) used three different sentiment lexicons: MPQA (Wilson et al., 2005), NRC (Mohammad and Turney, 2010), and HL (Hu and Liu, 2004).", "We found a small difference in word counts between their reported statistics for the MPQA lexicon and those we computed ourselves, as can be seen in the bold numbers in table 3.", "Originally, we assumed that a word can only occur in one sentiment class within the same lexicon, and this resulted in differing counts for all lexicons.", "This distinction is not clearly documented in the paper or code.", "However, our assumption turned out to be incorrect, giving a further illustration of why detailed descriptions and documentation of all decisions are important.", "We ran the same experiment as Vo and Zhang (2015) to show the effectiveness of sentiment lexicons; the results can be seen in table 4.", "We can clearly see that there are some differences, not just in the accuracy scores but also in the rank of the sentiment lexicons.", "We found that using just HL was best and that MPQA does help performance compared to the Target-dep baseline, which differs from the original findings.", "Since we found that using just HL performed best, the rest of the results will apply the Target-dep+ method using HL (which we found best) and using HL & MPQA (which the original authors found best).", "The original authors tested their methods using three different word vectors: 1. Word2Vec vectors trained on 5 million Tweets containing emoticons (W2V), 2. Sentiment Specific Word Embeddings (SSWE), and 3. W2V and SSWE combined.", "Neither set of word embeddings is available from the original authors, as the embeddings were never released and the link to them (http://ir.hit.edu.cn/~dytang/) no longer works.", "However, the embeddings were released through the Wang et al. (2017) code base (https://github.com/bluemonk482/tdparse) following a request for the code from the original authors.", "Figure 1 shows the results of the different word embeddings across the different methods.", "The main finding we see is that SSWE by themselves are not as informative as W2V vectors, which differs from the original findings.", "However, we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations.", "Scaling and Final Model comparison We test all of the methods on the test data set of Dong et al. (2014) and show the difference between the original and reproduced models in figure 2.", "Finally, we show the effect of scaling the data using Max Min scaling versus not scaling it.", "As stated before, we have been using Max Min scaling on the NP features; however, scaling was not mentioned in the original paper.", "The library the original authors were using, LibLinear (Fan et al., 2008), suggests in its practical guide (Hsu et al., 2003) to scale each feature to [0, 1], but this was not re-iterated in the original paper.", "We are using scikit-learn's (Pedregosa et al., 2011) LinearSVC, which is a wrapper around LibLinear, hence making it appropriate to use here.", "As can be seen in figure 2, not scaling can affect the results by around one-third.", "Reproduction of Wang et al. (2017) Wang et al. (2017) extended the NP work of Vo and Zhang (2015): instead of using the full tweet/sentence/text contexts, they used the full dependency graph of the target word.", "Thus, they created three different methods: 1. TDParse-, which uses only the full dependency-graph context, 2. TDParse, the features of TDParse- plus the left and right contexts, and 3. TDParse+, the features of TDParse plus the LS and RS contexts.", "The experiments are performed on the Dong et al. (2014) and Wang et al. (2017) Twitter datasets, where we train and test on the previously specified train and test splits.", "We also scale our features using Max Min scaling before inputting them into the SVM.", "We used all three sentiment lexicons, as in the original paper, and we found the C-value by performing 5-fold stratified cross validation on the training datasets.", "The results of these experiments can be seen in figure 3 (for the Election Twitter dataset, TDParse+ results were never reported in the original paper).", "As found in the replication above, scaling is very important but is typically overlooked when reporting.", "Tang et al. (2016a) were the first to use LSTMs specifically for TDSA.", "They created three different models: 1. LSTM, a standard LSTM that runs over the length of the sentence and takes no target information into account, 2. TDLSTM, which runs two LSTMs, one over the left and the other over the right context of the target word, and concatenates the outputs of the two, and 3. TCLSTM, the same as the TDLSTM method but with each input word vector concatenated with the vector of the target word.", "All of the methods' outputs are fed into a softmax activation function.",
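As a concrete illustration of the TDLSTM architecture just described, the following PyTorch sketch is our reconstruction, not the authors' implementation; the hyperparameters and names are illustrative, and we assume the softmax is supplied by a cross entropy loss. TCLSTM would differ only in concatenating the target's vector to every input embedding.

```python
import torch
import torch.nn as nn

class TDLSTM(nn.Module):
    """Two LSTMs meet at the target: one reads the left context (plus the
    target) left-to-right, the other reads the right context (plus the
    target) towards the target; their final hidden states are concatenated
    and projected to class scores."""
    def __init__(self, vocab_size, embed_dim=200, hidden_dim=100, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.left_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.right_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, left_ids, right_ids):
        # left_ids: tokens up to and including the target, in sentence order;
        # right_ids: tokens from the target to the end, already reversed so
        # the second LSTM reads towards the target from the right.
        _, (h_left, _) = self.left_lstm(self.embed(left_ids))
        _, (h_right, _) = self.right_lstm(self.embed(right_ids))
        return self.out(torch.cat([h_left[-1], h_right[-1]], dim=-1))

model = TDLSTM(vocab_size=10_000)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()  # applies the softmax internally
```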
"The experiments are performed on the Dong et al. (2014) dataset, where we train and test on the specified splits.", "For the LSTMs, we initialised the weights using the uniform distribution U(-0.003, 0.003), used Stochastic Gradient Descent (SGD) with a learning rate of 0.01 and cross entropy loss, and padded and truncated sequences to the length of the maximum sequence in the training dataset, as stated in the original paper; we did not \"set the clipping threshold of softmax layer as 200\" (Tang et al., 2016a), as we were unsure what this meant.", "With regard to the number of epochs trained, we used early stopping with a patience of 10 and allowed up to 300 epochs.", "Within their experiments they used SSWE and Glove Twitter vectors (Pennington et al., 2014).", "As the paper being reproduced does not define the number of epochs they trained for, we use early stopping.", "Early stopping requires us to split the training data into train and validation sets to know when to stop.", "As Reimers and Gurevych (2017) have shown that the random seed statistically significantly changes the results of experiments, we ran each model over each word embedding thirty times, using a different seed value but keeping the same stratified train and validation split, and reported the results on the same test data as the original paper.", "As can be seen in Figure 4, the initial seed value makes a large difference, more so for the smaller embeddings.", "In table 5, we show the difference between our mean and maximum results and the original result for each model using the 200-dimension Glove Twitter vectors.", "Even though the mean result is quite different from the original, the maximum is much closer.", "Our results generally agree with theirs on the ranking of the methods and the word embeddings.", "Overall, we were able to reproduce the results of all three papers.", "However, for the neural network/deep learning approach of Tang et al. (2016a), we agree with Reimers and Gurevych (2017) that reporting multiple runs of the system over different seed values is required, as single performance scores can be misleading; this could explain why previous papers obtained results different from the original for the TDLSTM method (Chen et al., 2017; Tay et al., 2017).",
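The multiple-seed protocol just described can be sketched as follows; `train_and_evaluate` is a hypothetical stand-in for one full training run with early stopping on the fixed validation split, returning a test score. Reporting the spread rather than a single score is the point.

```python
import random
import numpy as np
import torch

def run_over_seeds(train_and_evaluate, n_runs=30):
    """Repeat one experiment over different seeds, keeping the stratified
    train/validation split and the test data fixed across runs."""
    scores = []
    for seed in range(n_runs):
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        scores.append(train_and_evaluate())
    return {"mean": float(np.mean(scores)),
            "max": float(np.max(scores)),
            "std": float(np.std(scores))}
```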
"Mass Evaluation For all of the methods we pre-processed the text by lower-casing and tokenising using Twokenizer (Gimpel et al., 2011), and we used all three sentiment lexicons where applicable.", "We found the best word vectors from SSWE and the Common Crawl 42B 300-dimension Glove vectors by 5-fold stratified cross validation for the NP methods and by the highest accuracy on the validation set for the LSTM methods.", "We chose these word vectors as they have very different sizes (50 and 300 dimensions), and they have been shown to perform well on different text types: SSWE for social media (Tang et al., 2016a) and Glove for reviews (Chen et al., 2017).", "To make the experiments quicker and computationally less expensive, we filtered out all words from the word vectors that did not appear in the train and test datasets; this is equivalent, with respect to word coverage, to using all words.", "Finally, due to time constraints, we only report results for the LSTM methods with one seed value rather than multiple.", "The results of the methods using the best found word vectors on the test sets can be seen in table 6.", "We find that the TDParse methods generally perform best, but they only clearly outperform the other non-dependency-parser methods on the YouTuBean dataset.", "We hypothesise that this is due to the dataset having, on average, a deeper constituency tree depth, which could be seen as indicating more complex sentences on average.", "This could be due to it being from the spoken medium, compared to the rest of the datasets, which are written.", "We also find that using a sentiment lexicon is almost always beneficial, but only by a small amount.", "Within the LSTM-based methods, the TDLSTM method generally performs best, indicating that the extra target information that the TCLSTM method contains is not needed, but we believe this requires further analysis.", "We can conclude that the simpler NP models perform well across domain, type and medium, and that even without language-specific tools and lexicons they are competitive with the more complex LSTM-based methods.", "Discussion and conclusion The fast-developing subfield of TDSA has so far lacked a large-scale comparative mass evaluation of approaches using different models and datasets.", "In this paper, we address this generalisability limitation and perform the first direct comparison and reproduction of three different approaches for TDSA.", "While carrying out these reproductions, we have noted and described above the many emerging issues in previous research related to incomplete descriptions of methods and settings, patchy release of code, and lack of comparative evaluations.", "This is natural in a developing field, but it is crucial for ongoing development within NLP in general that improved repeatability practices are adopted.", "The practices adopted in our case studies are to reproduce the methods in open source code, adopt only open data, provide format conversion tools to ingest the different data formats, and describe and document all settings via the code and Jupyter Notebooks (released initially in anonymous form at submission time).", "We therefore argue that papers should not consider repeatability (replication or reproduction) or generalisability alone; these two key tenets of scientific practice should be brought together.", "In future work, we aim to extend our reproduction framework further, and to extend the comparative evaluation to languages other than English.", "This will necessitate changes in the framework, since we expect that dependency parsers and sentiment lexicons will be unavailable for some languages.", "We will also explore, through error analysis, in which situations different neural network architectures perform best." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.3", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related work", "Datasets used in our experiments", "Reproduction studies", "Reproduction of Vo and Zhang (2015)", "Scaling and Final Model comparison", "Reproduction of Wang et al. (2017)", "Mass Evaluation", "Discussion and conclusion" ] }
GEM-SciDuet-train-47#paper-1071#slide-6
Vo et al. 2015 Method
Pooling (Max, Min, Prod, Std, Avg) Left Context Target Context Right Context
Pooling (Max, Min, Prod, Std, Avg) Left Context Target Context Right Context
[]